diff --git a/README.md b/README.md index 5f6fd48..700f8b3 100644 --- a/README.md +++ b/README.md @@ -1,1117 +1,10 @@ ``` - 安装过程请:https://www.cnblogs.com/dukuan/p/9856269.html + 安装文档:https://www.cnblogs.com/dukuan ``` -# kubeadm-highavailiability - kubernetes high availiability deployment based on kubeadm, for Kubernetes version v1.11.x/v1.9.x/v1.7.x/v1.6.x -![k8s logo](images/Kubernetes.png) +# 超全面、超详细的Kubernetes视频教程,基于最新K8s进行讲解 +http://www.kubeasy.com/ -- [中文文档(for v1.11.x版本)](README_CN.md) -- [English document(for v1.11.x version)](README.md) -- [中文文档(for v1.9.x版本)](v1.9/README_CN.md) -- [English document(for v1.9.x version)](v1.9/README.md) -- [中文文档(for v1.7.x版本)](v1.7/README_CN.md) -- [English document(for v1.7.x version)](v1.7/README.md) -- [中文文档(for v1.6.x版本)](v1.6/README_CN.md) -- [English document(for v1.6.x version)](v1.6/README.md) - ---- - -- [GitHub project URL](https://github.com/cookeem/kubeadm-ha/) -- [OSChina project URL](https://git.oschina.net/cookeem/kubeadm-ha/) - ---- - -- This operation instruction is for version v1.11.x kubernetes cluster - -> v1.11.x version now support deploy tls etcd cluster in control plane - -### category - -1. [deployment architecture](#deployment-architecture) - 1. [deployment architecture summary](#deployment-architecture-summary) - 1. [detail deployment architecture](#detail-deployment-architecture) - 1. [hosts list](#hosts-list) -1. [prerequisites](#prerequisites) - 1. [version info](#version-info) - 1. [required docker images](#required-docker-images) - 1. [system configuration](#system-configuration) -1. [kubernetes installation](#kubernetes-installation) - 1. [firewalld and iptables settings](#firewalld-and-iptables-settings) - 1. [kubernetes and related services installation](#kubernetes-and-related-services-installation) - 1. [master hosts mutual trust](#master-hosts-mutual-trust) -1. [masters high availiability installation](#masters-high-availiability-installation) - 1. [create configuration files](#create-configuration-files) - 1. [kubeadm initialization](#kubeadm-initialization) - 1. [high availiability configuration](#high-availiability-configuration) -1. [masters load balance settings](#masters-load-balance-settings) - 1. [keepalived installation](#keepalived-installation) - 1. [nginx load balance settings](#nginx-load-balance-settings) - 1. [kube-proxy HA settings](#kube-proxy-ha-settings) - 1. [high availiability verify](#high-availiability-verify) - 1. [kubernetes addons installation](#kubernetes-addons-installation) -1. [workers join kubernetes cluster](#workers-join-kubernetes-cluster) - 1. [workers join HA cluster](#workers-join-ha-cluster) -1. [verify kubernetes cluster installation](#verify-kubernetes-cluster-installation) - 1. [verify kubernetes cluster high availiablity installation](#verify-kubernetes-cluster-high-availiablity-installation) - -### deployment architecture - -#### deployment architecture summary - -![ha logo](images/ha.png) - ---- -[category](#category) - -#### detail deployment architecture - -![k8s ha](images/k8s-ha.png) - -- kubernetes components: - -> kube-apiserver: exposes the Kubernetes API. It is the front-end for the Kubernetes control plane. It is designed to scale horizontally – that is, it scales by deploying more instances. -> etcd: is used as Kubernetes’ backing store. All cluster data is stored here. Always have a backup plan for etcd’s data for your Kubernetes cluster. -> kube-scheduler: watches newly created pods that have no node assigned, and selects a node for them to run on. 
-> kube-controller-manager: runs controllers, which are the background threads that handle routine tasks in the cluster. Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process. -> kubelet: is the primary node agent. It watches for pods that have been assigned to its node (either by apiserver or via local configuration file) -> kube-proxy: enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding. - -- load balancer - -> keepalived cluster config a virtual IP address (192.168.20.10), this virtual IP address point to k8s-master01, k8s-master02, k8s-master03. -> nginx service as the load balancer of k8s-master01, k8s-master02, k8s-master03's apiserver. The other nodes kubernetes services connect the keepalived virtual ip address (192.168.20.10) and nginx exposed port (16443) to communicate with the master cluster's apiservers. - ---- - -[category](#category) - -#### hosts list - -HostName | IPAddress | Notes | Components -:--- | :--- | :--- | :--- -k8s-master01 ~ 03 | 192.168.20.20 ~ 22 | master nodes * 3 | keepalived, nginx, etcd, kubelet, kube-apiserver -k8s-master-lb | 192.168.20.10 | keepalived virtual IP | N/A -k8s-node01 ~ 08 | 192.168.20.30 ~ 37 | worker nodes * 8 | kubelet - ---- - -[category](#category) - -### prerequisites - -#### version info - -- Linux version: CentOS 7.4.1708 - -- Core version: 4.6.4-1.el7.elrepo.x86_64 - -```sh -$ cat /etc/redhat-release -CentOS Linux release 7.4.1708 (Core) - -$ uname -r -4.6.4-1.el7.elrepo.x86_64 -``` - -- docker version: 17.12.0-ce-rc2 - -```sh -$ docker version -Client: - Version: 17.12.0-ce-rc2 - API version: 1.35 - Go version: go1.9.2 - Git commit: f9cde63 - Built: Tue Dec 12 06:42:20 2017 - OS/Arch: linux/amd64 - -Server: - Engine: - Version: 17.12.0-ce-rc2 - API version: 1.35 (minimum version 1.12) - Go version: go1.9.2 - Git commit: f9cde63 - Built: Tue Dec 12 06:44:50 2017 - OS/Arch: linux/amd64 - Experimental: false -``` - -- kubeadm version: v1.11.1 - -```sh -$ kubeadm version -kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"} -``` - -- kubelet version: v1.11.1 - -```sh -$ kubelet --version -Kubernetes v1.11.1 -``` - -- networks addons - -> calico - ---- - -[category](#category) - -#### required docker images - -- required docker images and tags - -```sh -# kuberentes basic components - -# use kubeadm to list all required docker images -$ kubeadm config images list --kubernetes-version=v1.11.1 -k8s.gcr.io/kube-apiserver-amd64:v1.11.1 -k8s.gcr.io/kube-controller-manager-amd64:v1.11.1 -k8s.gcr.io/kube-scheduler-amd64:v1.11.1 -k8s.gcr.io/kube-proxy-amd64:v1.11.1 -k8s.gcr.io/pause:3.1 -k8s.gcr.io/etcd-amd64:3.2.18 -k8s.gcr.io/coredns:1.1.3 - -# use kubeadm to pull all required docker images -$ kubeadm config images pull --kubernetes-version=v1.11.1 - -# kubernetes networks addons -$ docker pull quay.io/calico/typha:v0.7.4 -$ docker pull quay.io/calico/node:v3.1.3 -$ docker pull quay.io/calico/cni:v3.1.3 - -# kubernetes metrics server -$ docker pull gcr.io/google_containers/metrics-server-amd64:v0.2.1 - -# kubernetes dashboard -$ docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.3 - -# kubernetes heapster -$ docker pull k8s.gcr.io/heapster-amd64:v1.5.4 -$ docker pull 
k8s.gcr.io/heapster-influxdb-amd64:v1.5.2 -$ docker pull k8s.gcr.io/heapster-grafana-amd64:v5.0.4 - -# kubernetes apiserver load balancer -$ docker pull nginx:latest - -# prometheus -$ docker pull prom/prometheus:v2.3.1 - -# traefik -$ docker pull traefik:v1.6.3 - -# istio -$ docker pull docker.io/jaegertracing/all-in-one:1.5 -$ docker pull docker.io/prom/prometheus:v2.3.1 -$ docker pull docker.io/prom/statsd-exporter:v0.6.0 -$ docker pull gcr.io/istio-release/citadel:1.0.0 -$ docker pull gcr.io/istio-release/galley:1.0.0 -$ docker pull gcr.io/istio-release/grafana:1.0.0 -$ docker pull gcr.io/istio-release/mixer:1.0.0 -$ docker pull gcr.io/istio-release/pilot:1.0.0 -$ docker pull gcr.io/istio-release/proxy_init:1.0.0 -$ docker pull gcr.io/istio-release/proxyv2:1.0.0 -$ docker pull gcr.io/istio-release/servicegraph:1.0.0 -$ docker pull gcr.io/istio-release/sidecar_injector:1.0.0 -$ docker pull quay.io/coreos/hyperkube:v1.7.6_coreos.0 -``` - ---- - -[category](#category) - -#### system configuration - -- on all kubernetes nodes: add kubernetes' repository - -```sh -$ cat < /etc/yum.repos.d/kubernetes.repo -[kubernetes] -name=Kubernetes -baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 -enabled=1 -gpgcheck=1 -repo_gpgcheck=1 -gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg -exclude=kube* -EOF -``` - -- on all kubernetes nodes: update system - -```sh -$ yum update -y -``` - -- on all kubernetes nodes: set SELINUX to permissive mode - -```sh -$ vi /etc/selinux/config -SELINUX=permissive - -$ setenforce 0 -``` - -- on all kubernetes nodes: set iptables parameters - -```sh -$ cat < /etc/sysctl.d/k8s.conf -net.bridge.bridge-nf-call-ip6tables = 1 -net.bridge.bridge-nf-call-iptables = 1 -net.ipv4.ip_forward = 1 -EOF - -$ sysctl --system -``` - -- on all kubernetes nodes: disable swap - -```sh -$ swapoff -a - -# disable swap mount point in /etc/fstab -$ vi /etc/fstab -#/dev/mapper/centos-swap swap swap defaults 0 0 - -# check swap is disabled -$ cat /proc/swaps -Filename Type Size Used Priority -``` - -- on all kubernetes nodes: reboot hosts - -```sh -# reboot hosts -$ reboot -``` - ---- - -[category](#category) - -### kubernetes installation - -#### firewalld and iptables settings - -- on all kubernetes nodes: enable firewalld - -```sh -# restart firewalld service -$ systemctl enable firewalld -$ systemctl restart firewalld -$ systemctl status firewalld -``` - -- master ports list - -Protocol | Direction | Port | Comment -:--- | :--- | :--- | :--- -TCP | Inbound | 16443* | Load balancer Kubernetes API server port -TCP | Inbound | 6443* | Kubernetes API server -TCP | Inbound | 4001 | etcd listen client port -TCP | Inbound | 2379-2380 | etcd server client API -TCP | Inbound | 10250 | Kubelet API -TCP | Inbound | 10251 | kube-scheduler -TCP | Inbound | 10252 | kube-controller-manager -TCP | Inbound | 10255 | Read-only Kubelet API (Deprecated) -TCP | Inbound | 30000-32767 | NodePort Services - -- on all master nodes: set firewalld policy - -```sh -$ firewall-cmd --zone=public --add-port=16443/tcp --permanent -$ firewall-cmd --zone=public --add-port=6443/tcp --permanent -$ firewall-cmd --zone=public --add-port=4001/tcp --permanent -$ firewall-cmd --zone=public --add-port=2379-2380/tcp --permanent -$ firewall-cmd --zone=public --add-port=10250/tcp --permanent -$ firewall-cmd --zone=public --add-port=10251/tcp --permanent -$ firewall-cmd --zone=public --add-port=10252/tcp --permanent -$ firewall-cmd 
--zone=public --add-port=30000-32767/tcp --permanent - -$ firewall-cmd --reload - -$ firewall-cmd --list-all --zone=public -public (active) - target: default - icmp-block-inversion: no - interfaces: ens2f1 ens1f0 nm-bond - sources: - services: ssh dhcpv6-client - ports: 4001/tcp 6443/tcp 2379-2380/tcp 10250/tcp 10251/tcp 10252/tcp 30000-32767/tcp - protocols: - masquerade: no - forward-ports: - source-ports: - icmp-blocks: - rich rules: -``` - -- worker ports list - -Protocol | Direction | Port | Comment -:--- | :--- | :--- | :--- -TCP | Inbound | 10250 | Kubelet API -TCP | Inbound | 30000-32767 | NodePort Services - -- on all worker nodes: set firewalld policy - -```sh -$ firewall-cmd --zone=public --add-port=10250/tcp --permanent -$ firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent - -$ firewall-cmd --reload - -$ firewall-cmd --list-all --zone=public -public (active) - target: default - icmp-block-inversion: no - interfaces: ens2f1 ens1f0 nm-bond - sources: - services: ssh dhcpv6-client - ports: 10250/tcp 30000-32767/tcp - protocols: - masquerade: no - forward-ports: - source-ports: - icmp-blocks: - rich rules: -``` - -- on all kubernetes nodes: set firewalld to enable kube-proxy port forward - -```sh -$ firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 1 -i docker0 -j ACCEPT -m comment --comment "kube-proxy redirects" -$ firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 1 -o docker0 -j ACCEPT -m comment --comment "docker subnet" -$ firewall-cmd --reload - -$ firewall-cmd --direct --get-all-rules -ipv4 filter INPUT 1 -i docker0 -j ACCEPT -m comment --comment 'kube-proxy redirects' -ipv4 filter FORWARD 1 -o docker0 -j ACCEPT -m comment --comment 'docker subnet' - -# restart firewalld service -$ systemctl restart firewalld -``` - -- on all kubernetes nodes: remove this iptables chains, this settings will prevent kube-proxy node port forward. ( Notice: please run this command each time you restart firewalld ) Let's set the crontab. 
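Before relying on the scheduled cleanup, it is worth checking whether the REJECT rule that firewalld re-adds is actually present. A minimal check, sketched here with the same rule that the cron entry below deletes:

```sh
# list the INPUT chain with rule numbers and look for the rule that blocks NodePort traffic
$ iptables -L INPUT -n --line-numbers | grep icmp-host-prohibited

# if it shows up, the same deletion used in the cron entry clears it immediately
$ iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
```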
- -```sh -$ crontab -e -0,5,10,15,20,25,30,35,40,45,50,55 * * * * /usr/sbin/iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited -``` - ---- - -[category](#category) - -#### kubernetes and related services installation - -- on all kubernetes nodes: install kubernetes and related services, then start up kubelet and docker daemon - -```sh -$ yum install -y docker-ce-17.12.0.ce-0.2.rc2.el7.centos.x86_64 -$ yum install -y docker-compose-1.9.0-5.el7.noarch -$ systemctl enable docker && systemctl start docker - -$ yum install -y kubelet-1.11.1-0.x86_64 kubeadm-1.11.1-0.x86_64 kubectl-1.11.1-0.x86_64 -$ systemctl enable kubelet && systemctl start kubelet -``` - -- on all master nodes: install and start keepalived service - -```sh -$ yum install -y keepalived -$ systemctl enable keepalived && systemctl restart keepalived -``` - -#### master hosts mutual trust - -- on k8s-master01: set hosts mutual trust - -```sh -$ rm -rf /root/.ssh/* -$ ssh k8s-master01 pwd -$ ssh k8s-master02 rm -rf /root/.ssh/* -$ ssh k8s-master03 rm -rf /root/.ssh/* -$ ssh k8s-master02 mkdir -p /root/.ssh/ -$ ssh k8s-master03 mkdir -p /root/.ssh/ - -$ scp /root/.ssh/known_hosts root@k8s-master02:/root/.ssh/ -$ scp /root/.ssh/known_hosts root@k8s-master03:/root/.ssh/ - -$ ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa -$ cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys -$ scp /root/.ssh/authorized_keys root@k8s-master02:/root/.ssh/ -``` - -- on k8s-master02: set hosts mutual trust - -```sh -$ ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa -$ cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys -$ scp /root/.ssh/authorized_keys root@k8s-master03:/root/.ssh/ -``` - -- on k8s-master03: set hosts mutual trust - -```sh -$ ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa -$ cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys -$ scp /root/.ssh/authorized_keys root@k8s-master01:/root/.ssh/ -$ scp /root/.ssh/authorized_keys root@k8s-master02:/root/.ssh/ -``` - ---- - -[category](#category) - -### masters high availiability installation - -#### create configuration files - -- on k8s-master01: clone kubeadm-ha project source code - -```sh -$ git clone https://github.com/cookeem/kubeadm-ha -``` - -- on k8s-master01: use `create-config.sh` to create relative config files, this script will create all configuration files, follow the setting comment and make sure you set the parameters correctly. 
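A small pre-flight sketch before editing the variables in the listing below: the hostnames, addresses and interface names exported to the script should match what the masters actually report (the hostnames are the ones used throughout this guide; the `getent`/`ip` checks are only illustrative):

```sh
# confirm each master hostname resolves to the address you plan to export
$ for h in k8s-master01 k8s-master02 k8s-master03; do getent hosts $h; done

# confirm the interface name to use for K8SHA_NETINF1..3 (run on each master)
$ ip -o -4 addr show
```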
- -```sh -$ cd kubeadm-ha - -$ vi create-config.sh -# master keepalived virtual ip address -export K8SHA_VIP=192.168.60.79 -# master01 ip address -export K8SHA_IP1=192.168.60.72 -# master02 ip address -export K8SHA_IP2=192.168.60.77 -# master03 ip address -export K8SHA_IP3=192.168.60.78 -# master keepalived virtual ip hostname -export K8SHA_VHOST=k8s-master-lb -# master01 hostname -export K8SHA_HOST1=k8s-master01 -# master02 hostname -export K8SHA_HOST2=k8s-master02 -# master03 hostname -export K8SHA_HOST3=k8s-master03 -# master01 network interface name -export K8SHA_NETINF1=nm-bond -# master02 network interface name -export K8SHA_NETINF2=nm-bond -# master03 network interface name -export K8SHA_NETINF3=nm-bond -# keepalived auth_pass config -export K8SHA_KEEPALIVED_AUTH=412f7dc3bfed32194d1600c483e10ad1d -# calico reachable ip address -export K8SHA_CALICO_REACHABLE_IP=192.168.60.1 -# kubernetes CIDR pod subnet, if CIDR pod subnet is "172.168.0.0/16" please set to "172.168.0.0" -export K8SHA_CIDR=172.168.0.0 - -# run the shell, it will create 3 masters' kubeadm config files, keepalived config files, nginx load balance config files, and calico config files. -$ ./create-config.sh -create kubeadm-config.yaml files success. config/k8s-master01/kubeadm-config.yaml -create kubeadm-config.yaml files success. config/k8s-master02/kubeadm-config.yaml -create kubeadm-config.yaml files success. config/k8s-master03/kubeadm-config.yaml -create keepalived files success. config/k8s-master01/keepalived/ -create keepalived files success. config/k8s-master02/keepalived/ -create keepalived files success. config/k8s-master03/keepalived/ -create nginx-lb files success. config/k8s-master01/nginx-lb/ -create nginx-lb files success. config/k8s-master02/nginx-lb/ -create nginx-lb files success. config/k8s-master03/nginx-lb/ -create calico.yaml file success. calico/calico.yaml - -# set hostname environment variables -$ export HOST1=k8s-master01 -$ export HOST2=k8s-master02 -$ export HOST3=k8s-master03 - -# copy kubeadm config files to all master nodes, path is /root/ -$ scp -r config/$HOST1/kubeadm-config.yaml $HOST1:/root/ -$ scp -r config/$HOST2/kubeadm-config.yaml $HOST2:/root/ -$ scp -r config/$HOST3/kubeadm-config.yaml $HOST3:/root/ - -# copy keepalived config files to all master nodes, path is /etc/keepalived/category/ -$ scp -r config/$HOST1/keepalived/* $HOST1:/etc/keepalived/ -$ scp -r config/$HOST2/keepalived/* $HOST2:/etc/keepalived/ -$ scp -r config/$HOST3/keepalived/* $HOST3:/etc/keepalived/ - -# copy nginx load balance config files to all master nodes, path is /root/ -$ scp -r config/$HOST1/nginx-lb $HOST1:/root/ -$ scp -r config/$HOST2/nginx-lb $HOST2:/root/ -$ scp -r config/$HOST3/nginx-lb $HOST3:/root/ -``` - ---- - -[category](#category) - -#### kubeadm initialization - -- on k8s-master01: use kubeadm to init a kubernetes cluster - -```sh -# notice: you must save the following output message: kubeadm join --token ${YOUR_TOKEN} --discovery-token-ca-cert-hash ${YOUR_DISCOVERY_TOKEN_CA_CERT_HASH} , this command will use lately. 
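# a hedged aside, not part of the original output: if the join command printed by
# kubeadm init below is not saved, it can usually be regenerated later on this master;
# the printed line carries the same --token and --discovery-token-ca-cert-hash values
$ kubeadm token create --print-join-command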
-$ kubeadm init --config /root/kubeadm-config.yaml -kubeadm join 192.168.20.20:6443 --token ${YOUR_TOKEN} --discovery-token-ca-cert-hash sha256:${YOUR_DISCOVERY_TOKEN_CA_CERT_HASH} -``` - -- on all master nodes: set kubectl client environment variable - -```sh -$ cat <> ~/.bashrc -export KUBECONFIG=/etc/kubernetes/admin.conf -EOF - -$ source ~/.bashrc - -# kubectl now can connect the kubernetes cluster -$ kubectl get nodes -``` - -- on k8s-master01: wait until etcd, kube-apiserver, kube-controller-manager, kube-scheduler startup - -```sh -$ kubectl get pods -n kube-system -o wide -NAME READY STATUS RESTARTS AGE IP NODE -... -etcd-k8s-master01 1/1 Running 0 18m 192.168.20.20 k8s-master01 -kube-apiserver-k8s-master01 1/1 Running 0 18m 192.168.20.20 k8s-master01 -kube-controller-manager-k8s-master01 1/1 Running 0 18m 192.168.20.20 k8s-master01 -kube-scheduler-k8s-master01 1/1 Running 1 18m 192.168.20.20 k8s-master01 -... -``` - ---- - -[category](#category) - -#### high availiability configuration - -- on k8s-master01: copy certificates to other master nodes - -```sh -# set master nodes hostname -$ export CONTROL_PLANE_IPS="k8s-master02 k8s-master03" - -# copy certificates to other master nodes -$ for host in ${CONTROL_PLANE_IPS}; do - scp /etc/kubernetes/pki/ca.crt $host:/etc/kubernetes/pki/ca.crt - scp /etc/kubernetes/pki/ca.key $host:/etc/kubernetes/pki/ca.key - scp /etc/kubernetes/pki/sa.key $host:/etc/kubernetes/pki/sa.key - scp /etc/kubernetes/pki/sa.pub $host:/etc/kubernetes/pki/sa.pub - scp /etc/kubernetes/pki/front-proxy-ca.crt $host:/etc/kubernetes/pki/front-proxy-ca.crt - scp /etc/kubernetes/pki/front-proxy-ca.key $host:/etc/kubernetes/pki/front-proxy-ca.key - scp /etc/kubernetes/pki/etcd/ca.crt $host:/etc/kubernetes/pki/etcd/ca.crt - scp /etc/kubernetes/pki/etcd/ca.key $host:/etc/kubernetes/pki/etcd/ca.key - scp /etc/kubernetes/admin.conf $host:/etc/kubernetes/admin.conf -done -``` - -- on k8s-master02: master node join the cluster - -```sh -# create all certificates and kubelet config files -$ kubeadm alpha phase certs all --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubeconfig controller-manager --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubeconfig scheduler --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubelet config write-to-disk --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubelet write-env-file --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubeconfig kubelet --config /root/kubeadm-config.yaml -$ systemctl restart kubelet - -# set k8s-master01 and k8s-master02 HOSTNAME and ip address -$ export CP0_IP=192.168.20.20 -$ export CP0_HOSTNAME=k8s-master01 -$ export CP1_IP=192.168.20.21 -$ export CP1_HOSTNAME=k8s-master02 - -# add etcd member to the cluster -$ kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP1_HOSTNAME} https://${CP1_IP}:2380 -$ kubeadm alpha phase etcd local --config /root/kubeadm-config.yaml - -# prepare to start master -$ kubeadm alpha phase kubeconfig all --config /root/kubeadm-config.yaml -$ kubeadm alpha phase controlplane all --config /root/kubeadm-config.yaml -$ kubeadm alpha phase mark-master --config /root/kubeadm-config.yaml - -# modify /etc/kubernetes/admin.conf server settings -$ sed -i "s/192.168.20.20:6443/192.168.20.21:6443/g" /etc/kubernetes/admin.conf -``` - -- on k8s-master03: master node join 
the cluster - -```sh -# create all certificates and kubelet config files -$ kubeadm alpha phase certs all --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubeconfig controller-manager --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubeconfig scheduler --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubelet config write-to-disk --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubelet write-env-file --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubeconfig kubelet --config /root/kubeadm-config.yaml -$ systemctl restart kubelet - -# set k8s-master01 and k8s-master03 HOSTNAME and ip address -$ export CP0_IP=192.168.20.20 -$ export CP0_HOSTNAME=k8s-master01 -$ export CP2_IP=192.168.20.22 -$ export CP2_HOSTNAME=k8s-master03 - -# add etcd member to the cluster -$ kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP2_HOSTNAME} https://${CP2_IP}:2380 -$ kubeadm alpha phase etcd local --config /root/kubeadm-config.yaml - -# prepare to start master -$ kubeadm alpha phase kubeconfig all --config /root/kubeadm-config.yaml -$ kubeadm alpha phase controlplane all --config /root/kubeadm-config.yaml -$ kubeadm alpha phase mark-master --config /root/kubeadm-config.yaml - -# modify /etc/kubernetes/admin.conf server settings -$ sed -i "s/192.168.20.20:6443/192.168.20.22:6443/g" /etc/kubernetes/admin.conf -``` - -- on all master nodes: enable hpa to collect performance data form apiserver, add config below in file `/etc/kubernetes/manifests/kube-controller-manager.yaml` - -```sh -$ vi /etc/kubernetes/manifests/kube-controller-manager.yaml - - --horizontal-pod-autoscaler-use-rest-clients=false -``` - -- on all master nodes: enable istio auto-injection, add config below in file `/etc/kubernetes/manifests/kube-apiserver.yaml` - -```sh -$ vi /etc/kubernetes/manifests/kube-apiserver.yaml - - --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota - -# restart kubelet service -systemctl restart kubelet -``` - -- on any master nodes: install calico network addon, after network addon installed the cluster nodes status will be `READY` - -```sh -$ kubectl apply -f calico/ -``` - ---- - -[category](#category) - -### masters load balance settings - -#### keepalived installation - -- on all master nodes: restart keepalived service - -```sh -$ systemctl restart keepalived -$ systemctl status keepalived - -# check keepalived vip -$ curl -k https://k8s-master-lb:6443 -``` - ---- - -[category](#category) - -#### nginx load balance settings - -- on all master nodes: start up nginx load balance - -```sh -# use docker-compose to start up nginx load balance -$ docker-compose --file=/root/nginx-lb/docker-compose.yaml up -d -$ docker-compose --file=/root/nginx-lb/docker-compose.yaml ps - -# check nginx load balance -$ curl -k https://k8s-master-lb:16443 -``` - ---- - -[category](#category) - -#### kube-proxy HA settings - -- on any master nodes: set kube-proxy server settings, make sure this settings use the keepalived virtual IP and nginx load balancer port (here is: https://192.168.20.10:16443) - -```sh -$ kubectl edit -n kube-system configmap/kube-proxy - server: https://192.168.20.10:16443 -``` - -- on any master nodes: restart kube-proxy pods - -```sh -# find all 
kube-proxy pods -$ kubectl get pods --all-namespaces -o wide | grep proxy - -# delete and restart all kube-proxy pods -$ kubectl delete pod -n kube-system kube-proxy-XXX -``` - ---- - -[category](#category) - -#### high availiability verify - -- on any master nodes: check cluster running status - -```sh -# check kubernetes nodes status -$ kubectl get nodes -NAME STATUS ROLES AGE VERSION -k8s-master01 Ready master 1h v1.11.1 -k8s-master02 Ready master 58m v1.11.1 -k8s-master03 Ready master 55m v1.11.1 - -# check kube-system pods running status -$ kubectl get pods -n kube-system -o wide -NAME READY STATUS RESTARTS AGE IP NODE -calico-node-nxskr 2/2 Running 0 46m 192.168.20.22 k8s-master03 -calico-node-xv5xt 2/2 Running 0 46m 192.168.20.20 k8s-master01 -calico-node-zsmgp 2/2 Running 0 46m 192.168.20.21 k8s-master02 -coredns-78fcdf6894-kfzc7 1/1 Running 0 1h 172.168.2.3 k8s-master03 -coredns-78fcdf6894-t957l 1/1 Running 0 46m 172.168.1.2 k8s-master02 -etcd-k8s-master01 1/1 Running 0 1h 192.168.20.20 k8s-master01 -etcd-k8s-master02 1/1 Running 0 58m 192.168.20.21 k8s-master02 -etcd-k8s-master03 1/1 Running 0 54m 192.168.20.22 k8s-master03 -kube-apiserver-k8s-master01 1/1 Running 0 52m 192.168.20.20 k8s-master01 -kube-apiserver-k8s-master02 1/1 Running 0 52m 192.168.20.21 k8s-master02 -kube-apiserver-k8s-master03 1/1 Running 0 51m 192.168.20.22 k8s-master03 -kube-controller-manager-k8s-master01 1/1 Running 0 34m 192.168.20.20 k8s-master01 -kube-controller-manager-k8s-master02 1/1 Running 0 33m 192.168.20.21 k8s-master02 -kube-controller-manager-k8s-master03 1/1 Running 0 33m 192.168.20.22 k8s-master03 -kube-proxy-g9749 1/1 Running 0 36m 192.168.20.22 k8s-master03 -kube-proxy-lhzhb 1/1 Running 0 35m 192.168.20.20 k8s-master01 -kube-proxy-x8jwt 1/1 Running 0 36m 192.168.20.21 k8s-master02 -kube-scheduler-k8s-master01 1/1 Running 1 1h 192.168.20.20 k8s-master01 -kube-scheduler-k8s-master02 1/1 Running 0 57m 192.168.20.21 k8s-master02 -kube-scheduler-k8s-master03 1/1 Running 1 54m 192.168.20.22 k8s-master03 -``` - ---- - -[category](#category) - -#### kubernetes addons installation - -- on any master nodes: enable master node pod schedulable - -```sh -$ kubectl taint nodes --all node-role.kubernetes.io/master- -``` - -- on any master nodes: install metrics-server, after v1.11.0 heapster is deprecated for performance data collection, it use metrics-server - -```sh -$ kubectl apply -f metrics-server/ - -# wait for 5 minutes, use kubectl top to check the pod performance usage -$ kubectl top pods -n kube-system -NAME CPU(cores) MEMORY(bytes) -calico-node-wkstv 47m 113Mi -calico-node-x2sn5 36m 104Mi -calico-node-xnh6s 32m 106Mi -coredns-78fcdf6894-2xc6s 14m 30Mi -coredns-78fcdf6894-rk6ch 10m 22Mi -kube-apiserver-k8s-master01 163m 816Mi -kube-apiserver-k8s-master02 79m 617Mi -kube-apiserver-k8s-master03 73m 614Mi -kube-controller-manager-k8s-master01 52m 141Mi -kube-controller-manager-k8s-master02 0m 14Mi -kube-controller-manager-k8s-master03 0m 13Mi -kube-proxy-269t2 4m 21Mi -kube-proxy-6jc8n 9m 37Mi -kube-proxy-7n8xb 9m 39Mi -kube-scheduler-k8s-master01 20m 25Mi -kube-scheduler-k8s-master02 15m 19Mi -kube-scheduler-k8s-master03 15m 19Mi -metrics-server-77b77f5fc6-jm8t6 3m 43Mi -``` - -- on any master nodes: install heapster, after v1.11.0 heapster is deprecated for performance data collection, it use metrics-server. But kube-dashboard use heapster to display performance info, so we install it. 
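The metrics pipeline can also be verified on its own before adding heapster for the dashboard below; a short sketch (the API group is the one metrics-server normally registers):

```sh
# the aggregated metrics API should be listed once metrics-server is up
$ kubectl get apiservices | grep metrics.k8s.io

# node-level figures confirm the pipeline end to end
$ kubectl top nodes
```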
- -```sh -# install heapster, wait for 5 minutes -$ kubectl apply -f heapster/ -``` - -- on any master nodes: install kube-dashboard - -```sh -# install kube-dashboard -$ kubectl apply -f dashboard/ -``` - -> after install, open kube-dashboard in web browser, it need to login with token: https://k8s-master-lb:30000/ - -![dashboard-login](images/dashboard-login.png) - -- on any master nodes: get kube-dashboard login token - -```sh -# get kube-dashboard login token -$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') -``` - -> login to kube-dashboard, you can see all pods performance metrics - -![dashboard](images/dashboard.png) - -- on any master nodes: install traefik - -```sh -# create k8s-master-lb domain certificate -$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=k8s-master-lb" - -# create kubernetes secret -kubectl -n kube-system create secret generic traefik-cert --from-file=tls.key --from-file=tls.crt - -# install traefik -$ kubectl apply -f traefik/ -``` - -> after install use web browser to open traefik admin webUI: http://k8s-master-lb:30011/ - -![traefik](images/traefik.png) - -- on any master nodes: install istio - -```sh -# install istio -$ kubectl apply -f istio/ - -# check all istio pods -$ kubectl get pods -n istio-system -NAME READY STATUS RESTARTS AGE -grafana-69c856fc69-jbx49 1/1 Running 1 21m -istio-citadel-7c4fc8957b-vdbhp 1/1 Running 1 21m -istio-cleanup-secrets-5g95n 0/1 Completed 0 21m -istio-egressgateway-64674bd988-44fg8 1/1 Running 0 18m -istio-egressgateway-64674bd988-dgvfm 1/1 Running 1 16m -istio-egressgateway-64674bd988-fprtc 1/1 Running 0 18m -istio-egressgateway-64674bd988-kl6pw 1/1 Running 3 16m -istio-egressgateway-64674bd988-nphpk 1/1 Running 3 16m -istio-galley-595b94cddf-c5ctw 1/1 Running 70 21m -istio-grafana-post-install-nhs47 0/1 Completed 0 21m -istio-ingressgateway-4vtk5 1/1 Running 2 21m -istio-ingressgateway-5rscp 1/1 Running 3 21m -istio-ingressgateway-6z95f 1/1 Running 3 21m -istio-policy-589977bff5-jx5fd 2/2 Running 3 21m -istio-policy-589977bff5-n74q8 2/2 Running 3 21m -istio-sidecar-injector-86c4d57d56-mfnbp 1/1 Running 39 21m -istio-statsd-prom-bridge-5698d5798c-xdpp6 1/1 Running 1 21m -istio-telemetry-85d6475bfd-8lvsm 2/2 Running 2 21m -istio-telemetry-85d6475bfd-bfjsn 2/2 Running 2 21m -istio-telemetry-85d6475bfd-d9ld9 2/2 Running 2 21m -istio-tracing-bd5765b5b-cmszp 1/1 Running 1 21m -prometheus-77c5fc7cd-zf7zr 1/1 Running 1 21m -servicegraph-6b99c87849-l6zm6 1/1 Running 1 21m -``` - -- on any master nodes: install prometheus - -```sh -# install prometheus -$ kubectl apply -f prometheus/ -``` - -> after install, open prometheus admin webUI: http://k8s-master-lb:30013/ - -![prometheus](images/prometheus.png) - -> open grafana admin webUI (user and password is`admin`): http://k8s-master-lb:30006/ -> after login, add prometheus datasource: http://k8s-master-lb:30006/datasources - -![grafana-datasource](images/grafana-datasource.png) - -> import dashboard: http://k8s-master-lb:30006/dashboard/import import all files under `heapster/grafana-dashboard` directory, dashboard `Kubernetes App Metrics`, `Kubernetes cluster monitoring (via Prometheus)` - -![grafana-import](images/grafana-import.png) - -> dashboard you imported: - -![grafana-cluster](images/grafana-cluster.png) - -![grafana-app](images/grafana-app.png) - ---- - -[category](#category) - -### workers join kubernetes cluster - -#### workers join HA cluster - -- on all worker 
nodes: join kubernetes cluster - -```sh -$ kubeadm reset - -# use kubeadm to join the cluster, here we use the k8s-master01 apiserver address and port. -$ kubeadm join 192.168.20.20:6443 --token ${YOUR_TOKEN} --discovery-token-ca-cert-hash sha256:${YOUR_DISCOVERY_TOKEN_CA_CERT_HASH} - - -# set the `/etc/kubernetes/*.conf` server settings, make sure this settings use the keepalived virtual IP and nginx load balancer port (here is: https://192.168.20.10:16443) -$ sed -i "s/192.168.20.20:6443/192.168.20.10:16443/g" /etc/kubernetes/bootstrap-kubelet.conf -$ sed -i "s/192.168.20.20:6443/192.168.20.10:16443/g" /etc/kubernetes/kubelet.conf - -# restart docker and kubelet service -$ systemctl restart docker kubelet -``` - -- on any master nodes: check all nodes status - -```sh -$ kubectl get nodes -NAME STATUS ROLES AGE VERSION -k8s-master01 Ready master 1h v1.11.1 -k8s-master02 Ready master 58m v1.11.1 -k8s-master03 Ready master 55m v1.11.1 -k8s-node01 Ready 30m v1.11.1 -k8s-node02 Ready 24m v1.11.1 -k8s-node03 Ready 22m v1.11.1 -k8s-node04 Ready 22m v1.11.1 -k8s-node05 Ready 16m v1.11.1 -k8s-node06 Ready 13m v1.11.1 -k8s-node07 Ready 11m v1.11.1 -k8s-node08 Ready 10m v1.11.1 -``` - ---- - -[category](#category) - -### verify kubernetes cluster installation - -#### verify kubernetes cluster high availiablity installation - -- NodePort testing - -```sh -# create a nginx deployment, replicas=3 -$ kubectl run nginx --image=nginx --replicas=3 --port=80 -deployment "nginx" created - -# check nginx pods status -$ kubectl get pods -l=run=nginx -o wide -NAME READY STATUS RESTARTS AGE IP NODE -nginx-58b94844fd-jvlqh 1/1 Running 0 9s 172.168.7.2 k8s-node05 -nginx-58b94844fd-mkt72 1/1 Running 0 9s 172.168.9.2 k8s-node07 -nginx-58b94844fd-xhb8x 1/1 Running 0 9s 172.168.11.2 k8s-node09 - -# create nginx NodePort service -$ kubectl expose deployment nginx --type=NodePort --port=80 -service "nginx" exposed - -# check nginx service status -$ kubectl get svc -l=run=nginx -o wide -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR -nginx NodePort 10.106.129.121 80:31443/TCP 7s run=nginx - -# check nginx NodePort service accessibility -$ curl k8s-master-lb:31443 - - - -Welcome to nginx! - - - -

-Welcome to nginx!
-
-If you see this page, the nginx web server is successfully installed and
-working. Further configuration is required.
-
-For online documentation and support please refer to
-nginx.org.
-Commercial support is available at
-nginx.com.
-
-Thank you for using nginx.
-```
-
-- pods connectivity testing
-
-```sh
-kubectl run nginx-client -ti --rm --image=alpine -- ash
-/ # wget -O - nginx
-Connecting to nginx (10.102.101.78:80)
-index.html           100% |*****************************************|   612   0:00:00 ETA
-
-Welcome to nginx!

-
-If you see this page, the nginx web server is successfully installed and
-working. Further configuration is required.
-
-For online documentation and support please refer to
-nginx.org.
-Commercial support is available at
-nginx.com.
-
-Thank you for using nginx.

- - - -# remove all test nginx deployment and service -kubectl delete deploy,svc nginx -``` - -- HPA testing - -```sh -# create test nginx-server -kubectl run nginx-server --requests=cpu=10m --image=nginx --port=80 -kubectl expose deployment nginx-server --port=80 - -# create hpa -kubectl autoscale deployment nginx-server --cpu-percent=10 --min=1 --max=10 -kubectl get hpa -kubectl describe hpa nginx-server - -# increase nginx-server load -kubectl run -ti --rm load-generator --image=busybox -- ash -wget -q -O- http://nginx-server.default.svc.cluster.local > /dev/null -while true; do wget -q -O- http://nginx-server.default.svc.cluster.local > /dev/null; done - -# it may take a few minutes to stabilize the number of replicas. Since the amount of load is not controlled in any way it may happen that the final number of replicas will differ from this example. - -kubectl get hpa -w - -# remove all test deployment service and HPA -kubectl delete deploy,svc,hpa nginx-server -``` - ---- - -[category](#category) - -- now kubernetes high availiability cluster setup successfully 😃 +咨询QQ727585266 diff --git a/README_CN.md b/README_CN.md deleted file mode 100644 index be75207..0000000 --- a/README_CN.md +++ /dev/null @@ -1,1123 +0,0 @@ -``` - - 安装过程请:https://www.cnblogs.com/dukuan/p/9856269.html - -``` - -# kubeadm-highavailiability - 基于kubeadm的kubernetes高可用集群部署,支持v1.11.x v1.9.x v1.7.x v1.6.x版本 - -![k8s logo](images/Kubernetes.png) - -- [中文文档(for v1.11.x版本)](README_CN.md) -- [English document(for v1.11.x version)](README.md) -- [中文文档(for v1.9.x版本)](v1.9/README_CN.md) -- [English document(for v1.9.x version)](v1.9/README.md) -- [中文文档(for v1.7.x版本)](v1.7/README_CN.md) -- [English document(for v1.7.x version)](v1.7/README.md) -- [中文文档(for v1.6.x版本)](v1.6/README_CN.md) -- [English document(for v1.6.x version)](v1.6/README.md) - ---- - -- [GitHub项目地址](https://github.com/cookeem/kubeadm-ha/) -- [OSChina项目地址](https://git.oschina.net/cookeem/kubeadm-ha/) - ---- - -- 该指引适用于v1.11.x版本的kubernetes集群 - -> v1.11.x版本支持在control plane上启动TLS的etcd高可用集群。 - -### 目录 - -1. [部署架构](#部署架构) - 1. [概要部署架构](#概要部署架构) - 1. [详细部署架构](#详细部署架构) - 1. [主机节点清单](#主机节点清单) -1. [安装前准备](#安装前准备) - 1. [版本信息](#版本信息) - 1. [所需docker镜像](#所需docker镜像) - 1. [系统设置](#系统设置) -1. [kubernetes安装](#kubernetes安装) - 1. [firewalld和iptables相关端口设置](#firewalld和iptables相关端口设置) - 1. [kubernetes相关服务安装](#kubernetes相关服务安装) - 1. [master节点互信设置](#master节点互信设置) -1. [master高可用安装](#master高可用安装) - 1. [配置文件初始化](#配置文件初始化) - 1. [kubeadm初始化](#kubeadm初始化) - 1. [高可用配置](#高可用配置) -1. [master负载均衡设置](#master负载均衡设置) - 1. [keepalived安装配置](#keepalived安装配置) - 1. [nginx负载均衡配置](#nginx负载均衡配置) - 1. [kube-proxy高可用设置](#kube-proxy高可用设置) - 1. [验证高可用状态](#验证高可用状态) - 1. [基础组件安装](#基础组件安装) -1. [worker节点设置](#worker节点设置) - 1. [worker加入高可用集群](#worker加入高可用集群) -1. [集群验证](#集群验证) - 1. 
[验证集群高可用设置](#验证集群高可用设置) - -### 部署架构 - -#### 概要部署架构 - -![ha logo](images/ha.png) - -- kubernetes高可用的核心架构是master的高可用,kubectl、客户端以及nodes访问load balancer实现高可用。 - ---- -[返回目录](#目录) - -#### 详细部署架构 - -![k8s ha](images/k8s-ha.png) - -- kubernetes组件说明 - -> kube-apiserver:集群核心,集群API接口、集群各个组件通信的中枢;集群安全控制; -> etcd:集群的数据中心,用于存放集群的配置以及状态信息,非常重要,如果数据丢失那么集群将无法恢复;因此高可用集群部署首先就是etcd是高可用集群; -> kube-scheduler:集群Pod的调度中心;默认kubeadm安装情况下--leader-elect参数已经设置为true,保证master集群中只有一个kube-scheduler处于活跃状态; -> kube-controller-manager:集群状态管理器,当集群状态与期望不同时,kcm会努力让集群恢复期望状态,比如:当一个pod死掉,kcm会努力新建一个pod来恢复对应replicas set期望的状态;默认kubeadm安装情况下--leader-elect参数已经设置为true,保证master集群中只有一个kube-controller-manager处于活跃状态; -> kubelet: kubernetes node agent,负责与node上的docker engine打交道; -> kube-proxy: 每个node上一个,负责service vip到endpoint pod的流量转发,当前主要通过设置iptables规则实现。 - -- 负载均衡 - -> keepalived集群设置一个虚拟ip地址,虚拟ip地址指向k8s-master01、k8s-master02、k8s-master03。 -> nginx用于k8s-master01、k8s-master02、k8s-master03的apiserver的负载均衡。外部kubectl以及nodes访问apiserver的时候就可以用过keepalived的虚拟ip(192.168.20.10)以及nginx端口(16443)访问master集群的apiserver。 - ---- - -[返回目录](#目录) - -#### 主机节点清单 - -主机名 | IP地址 | 说明 | 组件 -:--- | :--- | :--- | :--- -k8s-master01 ~ 03 | 192.168.20.20 ~ 22 | master节点 * 3 | keepalived、nginx、etcd、kubelet、kube-apiserver -k8s-master-lb | 192.168.20.10 | keepalived虚拟IP | 无 -k8s-node01 ~ 08 | 192.168.20.30 ~ 37 | worker节点 * 8 | kubelet - ---- - -[返回目录](#目录) - -### 安装前准备 - -#### 版本信息 - -- Linux版本:CentOS 7.4.1708 - -- 内核版本: 4.6.4-1.el7.elrepo.x86_64 - -```sh -$ cat /etc/redhat-release -CentOS Linux release 7.4.1708 (Core) - -$ uname -r -4.6.4-1.el7.elrepo.x86_64 -``` - -- docker版本:17.12.0-ce-rc2 - -```sh -$ docker version -Client: - Version: 17.12.0-ce-rc2 - API version: 1.35 - Go version: go1.9.2 - Git commit: f9cde63 - Built: Tue Dec 12 06:42:20 2017 - OS/Arch: linux/amd64 - -Server: - Engine: - Version: 17.12.0-ce-rc2 - API version: 1.35 (minimum version 1.12) - Go version: go1.9.2 - Git commit: f9cde63 - Built: Tue Dec 12 06:44:50 2017 - OS/Arch: linux/amd64 - Experimental: false -``` - -- kubeadm版本:v1.11.1 - -```sh -$ kubeadm version -kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"} -``` - -- kubelet版本:v1.11.1 - -```sh -$ kubelet --version -Kubernetes v1.11.1 -``` - -- 网络组件 - -> calico - ---- - -[返回目录](#目录) - -#### 所需docker镜像 - -- 相关docker镜像以及版本 - -```sh -# kuberentes basic components - -# 通过kubeadm 获取基础组件镜像清单 -$ kubeadm config images list --kubernetes-version=v1.11.1 -k8s.gcr.io/kube-apiserver-amd64:v1.11.1 -k8s.gcr.io/kube-controller-manager-amd64:v1.11.1 -k8s.gcr.io/kube-scheduler-amd64:v1.11.1 -k8s.gcr.io/kube-proxy-amd64:v1.11.1 -k8s.gcr.io/pause:3.1 -k8s.gcr.io/etcd-amd64:3.2.18 -k8s.gcr.io/coredns:1.1.3 - -# 通过kubeadm 拉取基础镜像 -$ kubeadm config images pull --kubernetes-version=v1.11.1 - -# kubernetes networks add ons -$ docker pull quay.io/calico/typha:v0.7.4 -$ docker pull quay.io/calico/node:v3.1.3 -$ docker pull quay.io/calico/cni:v3.1.3 - -# kubernetes metrics server -$ docker pull gcr.io/google_containers/metrics-server-amd64:v0.2.1 - -# kubernetes dashboard -$ docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.3 - -# kubernetes heapster -$ docker pull k8s.gcr.io/heapster-amd64:v1.5.4 -$ docker pull k8s.gcr.io/heapster-influxdb-amd64:v1.5.2 -$ docker pull k8s.gcr.io/heapster-grafana-amd64:v5.0.4 - -# kubernetes apiserver load balancer -$ docker pull 
nginx:latest - -# prometheus -$ docker pull prom/prometheus:v2.3.1 - -# traefik -$ docker pull traefik:v1.6.3 - -# istio -$ docker pull docker.io/jaegertracing/all-in-one:1.5 -$ docker pull docker.io/prom/prometheus:v2.3.1 -$ docker pull docker.io/prom/statsd-exporter:v0.6.0 -$ docker pull gcr.io/istio-release/citadel:1.0.0 -$ docker pull gcr.io/istio-release/galley:1.0.0 -$ docker pull gcr.io/istio-release/grafana:1.0.0 -$ docker pull gcr.io/istio-release/mixer:1.0.0 -$ docker pull gcr.io/istio-release/pilot:1.0.0 -$ docker pull gcr.io/istio-release/proxy_init:1.0.0 -$ docker pull gcr.io/istio-release/proxyv2:1.0.0 -$ docker pull gcr.io/istio-release/servicegraph:1.0.0 -$ docker pull gcr.io/istio-release/sidecar_injector:1.0.0 -$ docker pull quay.io/coreos/hyperkube:v1.7.6_coreos.0 -``` - ---- - -[返回目录](#目录) - -#### 系统设置 - -- 在所有kubernetes节点上增加kubernetes仓库 - -```sh -$ cat < /etc/yum.repos.d/kubernetes.repo -[kubernetes] -name=Kubernetes -baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 -enabled=1 -gpgcheck=1 -repo_gpgcheck=1 -gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg -exclude=kube* -EOF -``` - -- 在所有kubernetes节点上进行系统更新 - -```sh -$ yum update -y -``` - -- 在所有kubernetes节点上设置SELINUX为permissive模式 - -```sh -$ vi /etc/selinux/config -SELINUX=permissive - -$ setenforce 0 -``` - -- 在所有kubernetes节点上设置iptables参数 - -```sh -$ cat < /etc/sysctl.d/k8s.conf -net.bridge.bridge-nf-call-ip6tables = 1 -net.bridge.bridge-nf-call-iptables = 1 -net.ipv4.ip_forward = 1 -EOF - -$ sysctl --system -``` - -- 在所有kubernetes节点上禁用swap - -```sh -$ swapoff -a - -# 禁用fstab中的swap项目 -$ vi /etc/fstab -#/dev/mapper/centos-swap swap swap defaults 0 0 - -# 确认swap已经被禁用 -$ cat /proc/swaps -Filename Type Size Used Priority -``` - -- 在所有kubernetes节点上重启主机 - -```sh -# 重启主机 -$ reboot -``` - ---- - -[返回目录](#目录) - -### kubernetes安装 - -#### firewalld和iptables相关端口设置 - -- 所有节点开启防火墙 - -```sh -# 重启防火墙 -$ systemctl enable firewalld -$ systemctl restart firewalld -$ systemctl status firewalld -``` - -- 相关端口(master) - -协议 | 方向 | 端口 | 说明 -:--- | :--- | :--- | :--- -TCP | Inbound | 16443* | Load balancer Kubernetes API server port -TCP | Inbound | 6443* | Kubernetes API server -TCP | Inbound | 4001 | etcd listen client port -TCP | Inbound | 2379-2380 | etcd server client API -TCP | Inbound | 10250 | Kubelet API -TCP | Inbound | 10251 | kube-scheduler -TCP | Inbound | 10252 | kube-controller-manager -TCP | Inbound | 10255 | Read-only Kubelet API (Deprecated) -TCP | Inbound | 30000-32767 | NodePort Services - -- 设置防火墙策略 - -```sh -$ firewall-cmd --zone=public --add-port=16443/tcp --permanent -$ firewall-cmd --zone=public --add-port=6443/tcp --permanent -$ firewall-cmd --zone=public --add-port=4001/tcp --permanent -$ firewall-cmd --zone=public --add-port=2379-2380/tcp --permanent -$ firewall-cmd --zone=public --add-port=10250/tcp --permanent -$ firewall-cmd --zone=public --add-port=10251/tcp --permanent -$ firewall-cmd --zone=public --add-port=10252/tcp --permanent -$ firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent - -$ firewall-cmd --reload - -$ firewall-cmd --list-all --zone=public -public (active) - target: default - icmp-block-inversion: no - interfaces: ens2f1 ens1f0 nm-bond - sources: - services: ssh dhcpv6-client - ports: 4001/tcp 6443/tcp 2379-2380/tcp 10250/tcp 10251/tcp 10252/tcp 30000-32767/tcp - protocols: - masquerade: no - forward-ports: - source-ports: - icmp-blocks: - rich rules: -``` - -- 相关端口(worker) - -协议 | 方向 
| 端口 | 说明 -:--- | :--- | :--- | :--- -TCP | Inbound | 10250 | Kubelet API -TCP | Inbound | 30000-32767 | NodePort Services - -- 设置防火墙策略 - -```sh -$ firewall-cmd --zone=public --add-port=10250/tcp --permanent -$ firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent - -$ firewall-cmd --reload - -$ firewall-cmd --list-all --zone=public -public (active) - target: default - icmp-block-inversion: no - interfaces: ens2f1 ens1f0 nm-bond - sources: - services: ssh dhcpv6-client - ports: 10250/tcp 30000-32767/tcp - protocols: - masquerade: no - forward-ports: - source-ports: - icmp-blocks: - rich rules: -``` - -- 在所有kubernetes节点上允许kube-proxy的forward - -```sh -$ firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 1 -i docker0 -j ACCEPT -m comment --comment "kube-proxy redirects" -$ firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 1 -o docker0 -j ACCEPT -m comment --comment "docker subnet" -$ firewall-cmd --reload - -$ firewall-cmd --direct --get-all-rules -ipv4 filter INPUT 1 -i docker0 -j ACCEPT -m comment --comment 'kube-proxy redirects' -ipv4 filter FORWARD 1 -o docker0 -j ACCEPT -m comment --comment 'docker subnet' - -# 重启防火墙 -$ systemctl restart firewalld -``` - -- 解决kube-proxy无法启用nodePort,重启firewalld必须执行以下命令,在所有节点设置定时任务 - -```sh -$ crontab -e -0,5,10,15,20,25,30,35,40,45,50,55 * * * * /usr/sbin/iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited -``` - ---- - -[返回目录](#目录) - -#### kubernetes相关服务安装 - -- 在所有kubernetes节点上安装并启动kubernetes - -```sh -$ yum install -y docker-ce-17.12.0.ce-0.2.rc2.el7.centos.x86_64 -$ yum install -y docker-compose-1.9.0-5.el7.noarch -$ systemctl enable docker && systemctl start docker - -$ yum install -y kubelet-1.11.1-0.x86_64 kubeadm-1.11.1-0.x86_64 kubectl-1.11.1-0.x86_64 -$ systemctl enable kubelet && systemctl start kubelet -``` - -- 在所有master节点安装并启动keepalived - -```sh -$ yum install -y keepalived -$ systemctl enable keepalived && systemctl restart keepalived -``` - -#### master节点互信设置 - -- 在k8s-master01节点上设置节点互信 - -```sh -$ rm -rf /root/.ssh/* -$ ssh k8s-master01 pwd -$ ssh k8s-master02 rm -rf /root/.ssh/* -$ ssh k8s-master03 rm -rf /root/.ssh/* -$ ssh k8s-master02 mkdir -p /root/.ssh/ -$ ssh k8s-master03 mkdir -p /root/.ssh/ - -$ scp /root/.ssh/known_hosts root@k8s-master02:/root/.ssh/ -$ scp /root/.ssh/known_hosts root@k8s-master03:/root/.ssh/ - -$ ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa -$ cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys -$ scp /root/.ssh/authorized_keys root@k8s-master02:/root/.ssh/ -``` - -- 在k8s-master02节点上设置节点互信 - -```sh -$ ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa -$ cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys -$ scp /root/.ssh/authorized_keys root@k8s-master03:/root/.ssh/ -``` - -- 在k8s-master03节点上设置节点互信 - -```sh -$ ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa -$ cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys -$ scp /root/.ssh/authorized_keys root@k8s-master01:/root/.ssh/ -$ scp /root/.ssh/authorized_keys root@k8s-master02:/root/.ssh/ -``` - ---- - -[返回目录](#目录) - -### master高可用安装 - -#### 配置文件初始化 - -- 在k8s-master01上克隆kubeadm-ha项目源码 - -```sh -$ git clone https://github.com/cookeem/kubeadm-ha -``` - -- 在k8s-master01上通过`create-config.sh`脚本创建相关配置文件 - -```sh -$ cd kubeadm-ha - -# 根据create-config.sh的提示,修改以下配置信息 -$ vi create-config.sh -# master keepalived virtual ip address -export K8SHA_VIP=192.168.60.79 -# master01 ip address -export K8SHA_IP1=192.168.60.72 -# master02 ip address -export K8SHA_IP2=192.168.60.77 -# master03 ip address -export 
K8SHA_IP3=192.168.60.78 -# master keepalived virtual ip hostname -export K8SHA_VHOST=k8s-master-lb -# master01 hostname -export K8SHA_HOST1=k8s-master01 -# master02 hostname -export K8SHA_HOST2=k8s-master02 -# master03 hostname -export K8SHA_HOST3=k8s-master03 -# master01 network interface name -export K8SHA_NETINF1=nm-bond -# master02 network interface name -export K8SHA_NETINF2=nm-bond -# master03 network interface name -export K8SHA_NETINF3=nm-bond -# keepalived auth_pass config -export K8SHA_KEEPALIVED_AUTH=412f7dc3bfed32194d1600c483e10ad1d -# calico reachable ip address -export K8SHA_CALICO_REACHABLE_IP=192.168.60.1 -# kubernetes CIDR pod subnet, if CIDR pod subnet is "172.168.0.0/16" please set to "172.168.0.0" -export K8SHA_CIDR=172.168.0.0 - -# 以下脚本会创建3个master节点的kubeadm配置文件,keepalived配置文件,nginx负载均衡配置文件,以及calico配置文件 -$ ./create-config.sh -create kubeadm-config.yaml files success. config/k8s-master01/kubeadm-config.yaml -create kubeadm-config.yaml files success. config/k8s-master02/kubeadm-config.yaml -create kubeadm-config.yaml files success. config/k8s-master03/kubeadm-config.yaml -create keepalived files success. config/k8s-master01/keepalived/ -create keepalived files success. config/k8s-master02/keepalived/ -create keepalived files success. config/k8s-master03/keepalived/ -create nginx-lb files success. config/k8s-master01/nginx-lb/ -create nginx-lb files success. config/k8s-master02/nginx-lb/ -create nginx-lb files success. config/k8s-master03/nginx-lb/ -create calico.yaml file success. calico/calico.yaml - -# 设置相关hostname变量 -$ export HOST1=k8s-master01 -$ export HOST2=k8s-master02 -$ export HOST3=k8s-master03 - -# 把kubeadm配置文件放到各个master节点的/root/目录 -$ scp -r config/$HOST1/kubeadm-config.yaml $HOST1:/root/ -$ scp -r config/$HOST2/kubeadm-config.yaml $HOST2:/root/ -$ scp -r config/$HOST3/kubeadm-config.yaml $HOST3:/root/ - -# 把keepalived配置文件放到各个master节点的/etc/keepalived/目录 -$ scp -r config/$HOST1/keepalived/* $HOST1:/etc/keepalived/ -$ scp -r config/$HOST2/keepalived/* $HOST2:/etc/keepalived/ -$ scp -r config/$HOST3/keepalived/* $HOST3:/etc/keepalived/ - -# 把nginx负载均衡配置文件放到各个master节点的/root/目录 -$ scp -r config/$HOST1/nginx-lb $HOST1:/root/ -$ scp -r config/$HOST2/nginx-lb $HOST2:/root/ -$ scp -r config/$HOST3/nginx-lb $HOST3:/root/ -``` - ---- - -[返回目录](#目录) - -#### kubeadm初始化 - -- 在k8s-master01节点上使用kubeadm进行kubernetes集群初始化 - -```sh -# 执行kubeadm init之后务必记录执行结果输出的${YOUR_TOKEN}以及${YOUR_DISCOVERY_TOKEN_CA_CERT_HASH} -$ kubeadm init --config /root/kubeadm-config.yaml -kubeadm join 192.168.20.20:6443 --token ${YOUR_TOKEN} --discovery-token-ca-cert-hash sha256:${YOUR_DISCOVERY_TOKEN_CA_CERT_HASH} -``` - -- 在所有master节点上设置kubectl的配置文件变量 - -```sh -$ cat <> ~/.bashrc -export KUBECONFIG=/etc/kubernetes/admin.conf -EOF - -$ source ~/.bashrc - -# 验证是否可以使用kubectl客户端连接集群 -$ kubectl get nodes -``` - -- 在k8s-master01节点上等待 etcd / kube-apiserver / kube-controller-manager / kube-scheduler 启动 - -```sh -$ kubectl get pods -n kube-system -o wide -NAME READY STATUS RESTARTS AGE IP NODE -... -etcd-k8s-master01 1/1 Running 0 18m 192.168.20.20 k8s-master01 -kube-apiserver-k8s-master01 1/1 Running 0 18m 192.168.20.20 k8s-master01 -kube-controller-manager-k8s-master01 1/1 Running 0 18m 192.168.20.20 k8s-master01 -kube-scheduler-k8s-master01 1/1 Running 1 18m 192.168.20.20 k8s-master01 -... 
-``` - ---- - -[返回目录](#目录) - -#### 高可用配置 - -- 在k8s-master01上把证书复制到其他master - -```sh -# 根据实际情况修改以下HOSTNAMES变量 -$ export CONTROL_PLANE_IPS="k8s-master02 k8s-master03" - -# 把证书复制到其他master节点 -$ for host in ${CONTROL_PLANE_IPS}; do - scp /etc/kubernetes/pki/ca.crt $host:/etc/kubernetes/pki/ca.crt - scp /etc/kubernetes/pki/ca.key $host:/etc/kubernetes/pki/ca.key - scp /etc/kubernetes/pki/sa.key $host:/etc/kubernetes/pki/sa.key - scp /etc/kubernetes/pki/sa.pub $host:/etc/kubernetes/pki/sa.pub - scp /etc/kubernetes/pki/front-proxy-ca.crt $host:/etc/kubernetes/pki/front-proxy-ca.crt - scp /etc/kubernetes/pki/front-proxy-ca.key $host:/etc/kubernetes/pki/front-proxy-ca.key - scp /etc/kubernetes/pki/etcd/ca.crt $host:/etc/kubernetes/pki/etcd/ca.crt - scp /etc/kubernetes/pki/etcd/ca.key $host:/etc/kubernetes/pki/etcd/ca.key - scp /etc/kubernetes/admin.conf $host:/etc/kubernetes/admin.conf -done -``` - -- 在k8s-master02上把节点加入集群 - -```sh -# 创建相关的证书以及kubelet配置文件 -$ kubeadm alpha phase certs all --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubeconfig controller-manager --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubeconfig scheduler --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubelet config write-to-disk --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubelet write-env-file --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubeconfig kubelet --config /root/kubeadm-config.yaml -$ systemctl restart kubelet - -# 设置k8s-master01以及k8s-master02的HOSTNAME以及地址 -$ export CP0_IP=192.168.20.20 -$ export CP0_HOSTNAME=k8s-master01 -$ export CP1_IP=192.168.20.21 -$ export CP1_HOSTNAME=k8s-master02 - -# etcd集群添加节点 -$ kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP1_HOSTNAME} https://${CP1_IP}:2380 -$ kubeadm alpha phase etcd local --config /root/kubeadm-config.yaml - -# 启动master节点 -$ kubeadm alpha phase kubeconfig all --config /root/kubeadm-config.yaml -$ kubeadm alpha phase controlplane all --config /root/kubeadm-config.yaml -$ kubeadm alpha phase mark-master --config /root/kubeadm-config.yaml - -# 修改/etc/kubernetes/admin.conf的服务地址指向本机 -$ sed -i "s/192.168.20.20:6443/192.168.20.21:6443/g" /etc/kubernetes/admin.conf -``` - -- 在k8s-master03上把节点加入集群 - -```sh -# 创建相关的证书以及kubelet配置文件 -$ kubeadm alpha phase certs all --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubeconfig controller-manager --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubeconfig scheduler --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubelet config write-to-disk --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubelet write-env-file --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubeconfig kubelet --config /root/kubeadm-config.yaml -$ systemctl restart kubelet - -# 设置k8s-master01以及k8s-master03的HOSTNAME以及地址 -$ export CP0_IP=192.168.20.20 -$ export CP0_HOSTNAME=k8s-master01 -$ export CP2_IP=192.168.20.22 -$ export CP2_HOSTNAME=k8s-master03 - -# etcd集群添加节点 -$ kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP2_HOSTNAME} https://${CP2_IP}:2380 -$ kubeadm alpha phase etcd local --config /root/kubeadm-config.yaml - -# 启动master节点 -$ kubeadm alpha phase kubeconfig all 
--config /root/kubeadm-config.yaml -$ kubeadm alpha phase controlplane all --config /root/kubeadm-config.yaml -$ kubeadm alpha phase mark-master --config /root/kubeadm-config.yaml - -# 修改/etc/kubernetes/admin.conf的服务地址指向本机 -$ sed -i "s/192.168.20.20:6443/192.168.20.22:6443/g" /etc/kubernetes/admin.conf -``` - -- 在所有master节点上允许hpa通过接口采集数据,修改`/etc/kubernetes/manifests/kube-controller-manager.yaml` - -```sh -$ vi /etc/kubernetes/manifests/kube-controller-manager.yaml - - --horizontal-pod-autoscaler-use-rest-clients=false -``` - -- 在所有master上允许istio的自动注入,修改`/etc/kubernetes/manifests/kube-apiserver.yaml` - -```sh -$ vi /etc/kubernetes/manifests/kube-apiserver.yaml - - --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota - -# 重启服务 -systemctl restart kubelet -``` - -- 在任意master节点上安装calico,安装calico网络组件后,nodes状态才会恢复正常 - -```sh -$ kubectl apply -f calico/ -``` - ---- - -[返回目录](#目录) - -### master负载均衡设置 - -#### keepalived安装配置 - -- 在所有master节点上重启keepalived - -```sh -$ systemctl restart keepalived -$ systemctl status keepalived - -# 检查keepalived的vip是否生效 -$ curl -k https://k8s-master-lb:6443 -``` - ---- - -[返回目录](#目录) - -#### nginx负载均衡配置 - -- 在所有master节点上启动nginx-lb - -```sh -# 使用docker-compose启动nginx负载均衡 -$ docker-compose --file=/root/nginx-lb/docker-compose.yaml up -d -$ docker-compose --file=/root/nginx-lb/docker-compose.yaml ps - -# 验证负载均衡的16443端口是否生效 -$ curl -k https://k8s-master-lb:16443 -``` - ---- - -[返回目录](#目录) - -#### kube-proxy高可用设置 - -- 在任意master节点上设置kube-proxy高可用 - -```sh -# 修改kube-proxy的configmap,把server指向load-balance地址和端口 -$ kubectl edit -n kube-system configmap/kube-proxy - server: https://192.168.20.10:16443 -``` - -- 在任意master节点上重启kube-proxy - -```sh -# 查找对应的kube-proxy pods -$ kubectl get pods --all-namespaces -o wide | grep proxy - -# 删除并重启对应的kube-proxy pods -$ kubectl delete pod -n kube-system kube-proxy-XXX -``` - ---- - -[返回目录](#目录) - -#### 验证高可用状态 - -- 在任意master节点上验证服务启动情况 - -```sh -# 检查节点情况 -$ kubectl get nodes -NAME STATUS ROLES AGE VERSION -k8s-master01 Ready master 1h v1.11.1 -k8s-master02 Ready master 58m v1.11.1 -k8s-master03 Ready master 55m v1.11.1 - -# 检查pods运行情况 -$ kubectl get pods -n kube-system -o wide -NAME READY STATUS RESTARTS AGE IP NODE -calico-node-nxskr 2/2 Running 0 46m 192.168.20.22 k8s-master03 -calico-node-xv5xt 2/2 Running 0 46m 192.168.20.20 k8s-master01 -calico-node-zsmgp 2/2 Running 0 46m 192.168.20.21 k8s-master02 -coredns-78fcdf6894-kfzc7 1/1 Running 0 1h 172.168.2.3 k8s-master03 -coredns-78fcdf6894-t957l 1/1 Running 0 46m 172.168.1.2 k8s-master02 -etcd-k8s-master01 1/1 Running 0 1h 192.168.20.20 k8s-master01 -etcd-k8s-master02 1/1 Running 0 58m 192.168.20.21 k8s-master02 -etcd-k8s-master03 1/1 Running 0 54m 192.168.20.22 k8s-master03 -kube-apiserver-k8s-master01 1/1 Running 0 52m 192.168.20.20 k8s-master01 -kube-apiserver-k8s-master02 1/1 Running 0 52m 192.168.20.21 k8s-master02 -kube-apiserver-k8s-master03 1/1 Running 0 51m 192.168.20.22 k8s-master03 -kube-controller-manager-k8s-master01 1/1 Running 0 34m 192.168.20.20 k8s-master01 -kube-controller-manager-k8s-master02 1/1 Running 0 33m 192.168.20.21 k8s-master02 -kube-controller-manager-k8s-master03 1/1 Running 0 33m 192.168.20.22 k8s-master03 -kube-proxy-g9749 1/1 Running 0 36m 192.168.20.22 k8s-master03 -kube-proxy-lhzhb 1/1 Running 0 35m 192.168.20.20 k8s-master01 -kube-proxy-x8jwt 1/1 Running 0 36m 192.168.20.21 k8s-master02 -kube-scheduler-k8s-master01 1/1 Running 1 
1h 192.168.20.20 k8s-master01 -kube-scheduler-k8s-master02 1/1 Running 0 57m 192.168.20.21 k8s-master02 -kube-scheduler-k8s-master03 1/1 Running 1 54m 192.168.20.22 k8s-master03 -``` - ---- - -[返回目录](#目录) - -#### 基础组件安装 - -- 在任意master节点上允许master上部署pod - -```sh -$ kubectl taint nodes --all node-role.kubernetes.io/master- -``` - -- 在任意master节点上安装metrics-server,从v1.11.0开始,性能采集不再采用heapster采集pod性能数据,而是使用metrics-server - -```sh -$ kubectl apply -f metrics-server/ - -# 等待5分钟,查看性能数据是否正常收集 -$ kubectl top pods -n kube-system -NAME CPU(cores) MEMORY(bytes) -calico-node-wkstv 47m 113Mi -calico-node-x2sn5 36m 104Mi -calico-node-xnh6s 32m 106Mi -coredns-78fcdf6894-2xc6s 14m 30Mi -coredns-78fcdf6894-rk6ch 10m 22Mi -kube-apiserver-k8s-master01 163m 816Mi -kube-apiserver-k8s-master02 79m 617Mi -kube-apiserver-k8s-master03 73m 614Mi -kube-controller-manager-k8s-master01 52m 141Mi -kube-controller-manager-k8s-master02 0m 14Mi -kube-controller-manager-k8s-master03 0m 13Mi -kube-proxy-269t2 4m 21Mi -kube-proxy-6jc8n 9m 37Mi -kube-proxy-7n8xb 9m 39Mi -kube-scheduler-k8s-master01 20m 25Mi -kube-scheduler-k8s-master02 15m 19Mi -kube-scheduler-k8s-master03 15m 19Mi -metrics-server-77b77f5fc6-jm8t6 3m 43Mi -``` - -- 在任意master节点上安装heapster,从v1.11.0开始,性能采集不再采用heapster采集pod性能数据,而是使用metrics-server,但是dashboard依然使用heapster呈现性能数据 - -```sh -# 安装heapster,需要等待5分钟,等待性能数据采集 -$ kubectl apply -f heapster/ -``` - -- 在任意master节点上安装dashboard - -```sh -# 安装dashboard -$ kubectl apply -f dashboard/ -``` - -> 成功安装后访问以下网址打开dashboard的登录界面,该界面提示需要登录token: https://k8s-master-lb:30000/ - -![dashboard-login](images/dashboard-login.png) - -- 在任意master节点上获取dashboard的登录token - -```sh -# 获取dashboard的登录token -$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') -``` - -> 使用token进行登录,进入后可以看到heapster采集的各个pod以及节点的性能数据 - -![dashboard](images/dashboard.png) - -- 在任意master节点上安装traefik - -```sh -# 创建k8s-master-lb域名的证书 -$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=k8s-master-lb" - -# 把证书写入到secret -kubectl -n kube-system create secret generic traefik-cert --from-file=tls.key --from-file=tls.crt - -# 安装traefik -$ kubectl apply -f traefik/ -``` - -> 成功安装后访问以下网址打开traefik管理界面: http://k8s-master-lb:30011/ - -![traefik](images/traefik.png) - -- 在任意master节点上安装istio - -```sh -# 安装istio -$ kubectl apply -f istio/ - -# 检查istio服务相关pods -$ kubectl get pods -n istio-system -NAME READY STATUS RESTARTS AGE -grafana-69c856fc69-jbx49 1/1 Running 1 21m -istio-citadel-7c4fc8957b-vdbhp 1/1 Running 1 21m -istio-cleanup-secrets-5g95n 0/1 Completed 0 21m -istio-egressgateway-64674bd988-44fg8 1/1 Running 0 18m -istio-egressgateway-64674bd988-dgvfm 1/1 Running 1 16m -istio-egressgateway-64674bd988-fprtc 1/1 Running 0 18m -istio-egressgateway-64674bd988-kl6pw 1/1 Running 3 16m -istio-egressgateway-64674bd988-nphpk 1/1 Running 3 16m -istio-galley-595b94cddf-c5ctw 1/1 Running 70 21m -istio-grafana-post-install-nhs47 0/1 Completed 0 21m -istio-ingressgateway-4vtk5 1/1 Running 2 21m -istio-ingressgateway-5rscp 1/1 Running 3 21m -istio-ingressgateway-6z95f 1/1 Running 3 21m -istio-policy-589977bff5-jx5fd 2/2 Running 3 21m -istio-policy-589977bff5-n74q8 2/2 Running 3 21m -istio-sidecar-injector-86c4d57d56-mfnbp 1/1 Running 39 21m -istio-statsd-prom-bridge-5698d5798c-xdpp6 1/1 Running 1 21m -istio-telemetry-85d6475bfd-8lvsm 2/2 Running 2 21m -istio-telemetry-85d6475bfd-bfjsn 2/2 Running 2 21m -istio-telemetry-85d6475bfd-d9ld9 2/2 Running 2 21m -istio-tracing-bd5765b5b-cmszp 1/1 
Running 1 21m -prometheus-77c5fc7cd-zf7zr 1/1 Running 1 21m -servicegraph-6b99c87849-l6zm6 1/1 Running 1 21m -``` - -- 在任意master节点上安装prometheus - -```sh -# 安装prometheus -$ kubectl apply -f prometheus/ -``` - -> 成功安装后访问以下网址打开prometheus管理界面,查看相关性能采集数据: http://k8s-master-lb:30013/ - -![prometheus](images/prometheus.png) - -> 成功安装后访问以下网址打开grafana管理界面(账号密码都是`admin`),查看相关性能采集数据: http://k8s-master-lb:30006/ -> 登录后,进入datasource设置界面,增加prometheus数据源,http://k8s-master-lb:30006/datasources - -![grafana-datasource](images/grafana-datasource.png) - -> 进入导入dashboard界面: http://k8s-master-lb:30006/dashboard/import 导入`heapster/grafana-dashboard`目录下的dashboard `Kubernetes App Metrics`和`Kubernetes cluster monitoring (via Prometheus)` - -![grafana-import](images/grafana-import.png) - -> 导入的dashboard性能呈现如下图: - -![grafana-cluster](images/grafana-cluster.png) - -![grafana-app](images/grafana-app.png) - ---- - -[返回目录](#目录) - -### worker节点设置 - -#### worker加入高可用集群 - -- 在所有workers节点上,使用kubeadm join加入kubernetes集群 - -```sh -# 清理节点上的kubernetes配置信息 -$ kubeadm reset - -# 使用之前kubeadm init执行结果记录的${YOUR_TOKEN}以及${YOUR_DISCOVERY_TOKEN_CA_CERT_HASH},把worker节点加入到集群 -$ kubeadm join 192.168.20.20:6443 --token ${YOUR_TOKEN} --discovery-token-ca-cert-hash sha256:${YOUR_DISCOVERY_TOKEN_CA_CERT_HASH} - - -# 在workers上修改kubernetes集群设置,让server指向nginx负载均衡的ip和端口 -$ sed -i "s/192.168.20.20:6443/192.168.20.10:16443/g" /etc/kubernetes/bootstrap-kubelet.conf -$ sed -i "s/192.168.20.20:6443/192.168.20.10:16443/g" /etc/kubernetes/kubelet.conf - -# 重启本节点 -$ systemctl restart docker kubelet -``` - -- 在任意master节点上验证节点状态 - -```sh -$ kubectl get nodes -NAME STATUS ROLES AGE VERSION -k8s-master01 Ready master 1h v1.11.1 -k8s-master02 Ready master 58m v1.11.1 -k8s-master03 Ready master 55m v1.11.1 -k8s-node01 Ready 30m v1.11.1 -k8s-node02 Ready 24m v1.11.1 -k8s-node03 Ready 22m v1.11.1 -k8s-node04 Ready 22m v1.11.1 -k8s-node05 Ready 16m v1.11.1 -k8s-node06 Ready 13m v1.11.1 -k8s-node07 Ready 11m v1.11.1 -k8s-node08 Ready 10m v1.11.1 -``` - ---- - -[返回目录](#目录) - -### 集群验证 - -#### 验证集群高可用设置 - -- 验证集群高可用 - -```sh -# 创建一个replicas=3的nginx deployment -$ kubectl run nginx --image=nginx --replicas=3 --port=80 -deployment "nginx" created - -# 检查nginx pod的创建情况 -$ kubectl get pods -l=run=nginx -o wide -NAME READY STATUS RESTARTS AGE IP NODE -nginx-58b94844fd-jvlqh 1/1 Running 0 9s 172.168.7.2 k8s-node05 -nginx-58b94844fd-mkt72 1/1 Running 0 9s 172.168.9.2 k8s-node07 -nginx-58b94844fd-xhb8x 1/1 Running 0 9s 172.168.11.2 k8s-node09 - -# 创建nginx的NodePort service -$ kubectl expose deployment nginx --type=NodePort --port=80 -service "nginx" exposed - -# 检查nginx service的创建情况 -$ kubectl get svc -l=run=nginx -o wide -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR -nginx NodePort 10.106.129.121 80:31443/TCP 7s run=nginx - -# 检查nginx NodePort service是否正常提供服务 -$ curl k8s-master-lb:31443 - - - -Welcome to nginx! - - - -

-<body>
-<h1>Welcome to nginx!</h1>
-<p>If you see this page, the nginx web server is successfully installed and
-working. Further configuration is required.</p>
-
-<p>For online documentation and support please refer to
-<a href="http://nginx.org/">nginx.org</a>.<br/>
-Commercial support is available at
-<a href="http://nginx.com/">nginx.com</a>.</p>
-
-<p><em>Thank you for using nginx.</em></p>
-</body>
-</html>
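-
-# 补充验证(可选示例, 假设NodePort仍为上面kubectl get svc输出中的31443):
-# 逐个通过各master节点IP以及keepalived VIP访问NodePort, 确认每个入口的转发都生效, 每行应返回HTTP状态码200
-$ for ip in 192.168.20.20 192.168.20.21 192.168.20.22 192.168.20.10; do
-    curl -s -o /dev/null -w "$ip %{http_code}\n" http://$ip:31443
-  done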
- - -``` - -- pod之间互访测试 - -```sh -# 启动一个client测试nginx是否可以访问 -kubectl run nginx-client -ti --rm --image=alpine -- ash -/ # wget -O - nginx -Connecting to nginx (10.102.101.78:80) -index.html 100% |*****************************************| 612 0:00:00 ETA - - - - -Welcome to nginx! - - - -

-<body>
-<h1>Welcome to nginx!</h1>
-<p>If you see this page, the nginx web server is successfully installed and
-working. Further configuration is required.</p>
-
-<p>For online documentation and support please refer to
-<a href="http://nginx.org/">nginx.org</a>.<br/>
-Commercial support is available at
-<a href="http://nginx.com/">nginx.com</a>.</p>
-
-<p><em>Thank you for using nginx.</em></p>
-</body>
-</html>
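-
-# 补充示例(假设集群使用默认的cluster.local域, 且alpine镜像内的busybox提供nslookup):
-# 在该客户端容器内顺带验证集群DNS能否解析nginx service的完整域名
-/ # nslookup nginx.default.svc.cluster.local
-
-# 退出客户端容器(--rm会自动清理该临时pod), 再回到master节点执行下面的清理命令
-/ # exit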
- - - -# 清除nginx的deployment以及service -kubectl delete deploy,svc nginx -``` - -- 测试HPA自动扩展 - -```sh -# 创建测试服务 -kubectl run nginx-server --requests=cpu=10m --image=nginx --port=80 -kubectl expose deployment nginx-server --port=80 - -# 创建hpa -kubectl autoscale deployment nginx-server --cpu-percent=10 --min=1 --max=10 -kubectl get hpa -kubectl describe hpa nginx-server - -# 给测试服务增加负载 -kubectl run -ti --rm load-generator --image=busybox -- ash -wget -q -O- http://nginx-server.default.svc.cluster.local > /dev/null -while true; do wget -q -O- http://nginx-server.default.svc.cluster.local > /dev/null; done - -# 检查hpa自动扩展情况,一般需要等待几分钟。结束增加负载后,pod自动缩容(自动缩容需要大概10-15分钟) -kubectl get hpa -w - -# 删除测试数据 -kubectl delete deploy,svc,hpa nginx-server -``` - ---- - -[返回目录](#目录) - -- 至此kubernetes高可用集群完成部署,并测试通过 😃 diff --git a/calico/calico.yaml b/calico/calico.yaml deleted file mode 100644 index 2cc0197..0000000 --- a/calico/calico.yaml +++ /dev/null @@ -1,470 +0,0 @@ -# Calico Version v3.1.3 -# https://docs.projectcalico.org/v3.1/releases#v3.1.3 -# This manifest includes the following component versions: -# calico/node:v3.1.3 -# calico/cni:v3.1.3 - -# This ConfigMap is used to configure a self-hosted Calico installation. -kind: ConfigMap -apiVersion: v1 -metadata: - name: calico-config - namespace: kube-system -data: - # To enable Typha, set this to "calico-typha" *and* set a non-zero value for Typha replicas - # below. We recommend using Typha if you have more than 50 nodes. Above 100 nodes it is - # essential. - typha_service_name: "none" - - # The CNI network configuration to install on each node. - cni_network_config: |- - { - "name": "k8s-pod-network", - "cniVersion": "0.3.0", - "plugins": [ - { - "type": "calico", - "log_level": "info", - "datastore_type": "kubernetes", - "nodename": "__KUBERNETES_NODE_NAME__", - "mtu": 1500, - "ipam": { - "type": "host-local", - "subnet": "usePodCidr" - }, - "policy": { - "type": "k8s" - }, - "kubernetes": { - "kubeconfig": "__KUBECONFIG_FILEPATH__" - } - }, - { - "type": "portmap", - "snat": true, - "capabilities": {"portMappings": true} - } - ] - } - ---- - -# This manifest creates a Service, which will be backed by Calico's Typha daemon. -# Typha sits in between Felix and the API server, reducing Calico's load on the API server. - -apiVersion: v1 -kind: Service -metadata: - name: calico-typha - namespace: kube-system - labels: - k8s-app: calico-typha -spec: - ports: - - port: 5473 - protocol: TCP - targetPort: calico-typha - name: calico-typha - selector: - k8s-app: calico-typha - ---- - -# This manifest creates a Deployment of Typha to back the above service. - -apiVersion: apps/v1beta1 -kind: Deployment -metadata: - name: calico-typha - namespace: kube-system - labels: - k8s-app: calico-typha -spec: - # Number of Typha replicas. To enable Typha, set this to a non-zero value *and* set the - # typha_service_name variable in the calico-config ConfigMap above. - # - # We recommend using Typha if you have more than 50 nodes. Above 100 nodes it is essential - # (when using the Kubernetes datastore). Use one replica for every 100-200 nodes. In - # production, we recommend running at least 3 replicas to reduce the impact of rolling upgrade. - replicas: 0 - revisionHistoryLimit: 2 - template: - metadata: - labels: - k8s-app: calico-typha - annotations: - # This, along with the CriticalAddonsOnly toleration below, marks the pod as a critical - # add-on, ensuring it gets priority scheduling and that its resources are reserved - # if it ever gets evicted. 
- scheduler.alpha.kubernetes.io/critical-pod: '' - spec: - hostNetwork: true - tolerations: - # Mark the pod as a critical add-on for rescheduling. - - key: CriticalAddonsOnly - operator: Exists - # Since Calico can't network a pod until Typha is up, we need to run Typha itself - # as a host-networked pod. - serviceAccountName: calico-node - containers: - - image: quay.io/calico/typha:v0.7.4 - name: calico-typha - ports: - - containerPort: 5473 - name: calico-typha - protocol: TCP - env: - # Enable "info" logging by default. Can be set to "debug" to increase verbosity. - - name: TYPHA_LOGSEVERITYSCREEN - value: "info" - # Disable logging to file and syslog since those don't make sense in Kubernetes. - - name: TYPHA_LOGFILEPATH - value: "none" - - name: TYPHA_LOGSEVERITYSYS - value: "none" - # Monitor the Kubernetes API to find the number of running instances and rebalance - # connections. - - name: TYPHA_CONNECTIONREBALANCINGMODE - value: "kubernetes" - - name: TYPHA_DATASTORETYPE - value: "kubernetes" - - name: TYPHA_HEALTHENABLED - value: "true" - # Uncomment these lines to enable prometheus metrics. Since Typha is host-networked, - # this opens a port on the host, which may need to be secured. - #- name: TYPHA_PROMETHEUSMETRICSENABLED - # value: "true" - #- name: TYPHA_PROMETHEUSMETRICSPORT - # value: "9093" - livenessProbe: - httpGet: - path: /liveness - port: 9098 - periodSeconds: 30 - initialDelaySeconds: 30 - readinessProbe: - httpGet: - path: /readiness - port: 9098 - periodSeconds: 10 - ---- - -# This manifest installs the calico/node container, as well -# as the Calico CNI plugins and network config on -# each master and worker node in a Kubernetes cluster. -kind: DaemonSet -apiVersion: extensions/v1beta1 -metadata: - name: calico-node - namespace: kube-system - labels: - k8s-app: calico-node -spec: - selector: - matchLabels: - k8s-app: calico-node - updateStrategy: - type: RollingUpdate - rollingUpdate: - maxUnavailable: 1 - template: - metadata: - labels: - k8s-app: calico-node - annotations: - # This, along with the CriticalAddonsOnly toleration below, - # marks the pod as a critical add-on, ensuring it gets - # priority scheduling and that its resources are reserved - # if it ever gets evicted. - scheduler.alpha.kubernetes.io/critical-pod: '' - spec: - hostNetwork: true - tolerations: - # Make sure calico/node gets scheduled on all nodes. - - effect: NoSchedule - operator: Exists - # Mark the pod as a critical add-on for rescheduling. - - key: CriticalAddonsOnly - operator: Exists - - effect: NoExecute - operator: Exists - serviceAccountName: calico-node - # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force - # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods. - terminationGracePeriodSeconds: 0 - containers: - # Runs calico/node container on each Kubernetes node. This - # container programs network policy and routes on each - # host. - - name: calico-node - image: quay.io/calico/node:v3.1.3 - env: - # Use Kubernetes API as the backing datastore. - - name: DATASTORE_TYPE - value: "kubernetes" - # Enable felix info logging. - - name: FELIX_LOGSEVERITYSCREEN - value: "info" - # Cluster type to identify the deployment type - - name: CLUSTER_TYPE - value: "k8s,bgp" - # Disable file logging so `kubectl logs` works. - - name: CALICO_DISABLE_FILE_LOGGING - value: "true" - # Set Felix endpoint to host default action to ACCEPT. 
- - name: FELIX_DEFAULTENDPOINTTOHOSTACTION - value: "ACCEPT" - # Disable IPV6 on Kubernetes. - - name: FELIX_IPV6SUPPORT - value: "false" - # Set MTU for tunnel device used if ipip is enabled - - name: FELIX_IPINIPMTU - value: "1440" - # Wait for the datastore. - - name: WAIT_FOR_DATASTORE - value: "true" - # The default IPv4 pool to create on startup if none exists. Pod IPs will be - # chosen from this range. Changing this value after installation will have - # no effect. This should fall within `--cluster-cidr`. - - name: CALICO_IPV4POOL_CIDR - value: "172.168.0.0/16" - # Enable IPIP - - name: CALICO_IPV4POOL_IPIP - value: "Always" - # Enable IP-in-IP within Felix. - - name: FELIX_IPINIPENABLED - value: "true" - # Typha support: controlled by the ConfigMap. - - name: FELIX_TYPHAK8SSERVICENAME - valueFrom: - configMapKeyRef: - name: calico-config - key: typha_service_name - # Set based on the k8s node name. - - name: NODENAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - # Auto-detect the BGP IP address. - - name: IP - value: "autodetect" - - name: IP_AUTODETECTION_METHOD - value: "can-reach=192.168.0.1" - - name: FELIX_HEALTHENABLED - value: "true" - securityContext: - privileged: true - resources: - requests: - cpu: 250m - livenessProbe: - httpGet: - path: /liveness - port: 9099 - periodSeconds: 10 - initialDelaySeconds: 10 - failureThreshold: 6 - readinessProbe: - httpGet: - path: /readiness - port: 9099 - periodSeconds: 10 - volumeMounts: - - mountPath: /lib/modules - name: lib-modules - readOnly: true - - mountPath: /var/run/calico - name: var-run-calico - readOnly: false - - mountPath: /var/lib/calico - name: var-lib-calico - readOnly: false - # This container installs the Calico CNI binaries - # and CNI network config file on each node. - - name: install-cni - image: quay.io/calico/cni:v3.1.3 - command: ["/install-cni.sh"] - env: - # Name of the CNI config file to create. - - name: CNI_CONF_NAME - value: "10-calico.conflist" - # The CNI network config to install on each node. - - name: CNI_NETWORK_CONFIG - valueFrom: - configMapKeyRef: - name: calico-config - key: cni_network_config - # Set the hostname based on the k8s node name. - - name: KUBERNETES_NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - volumeMounts: - - mountPath: /host/opt/cni/bin - name: cni-bin-dir - - mountPath: /host/etc/cni/net.d - name: cni-net-dir - volumes: - # Used by calico/node. - - name: lib-modules - hostPath: - path: /lib/modules - - name: var-run-calico - hostPath: - path: /var/run/calico - - name: var-lib-calico - hostPath: - path: /var/lib/calico - # Used to install CNI. - - name: cni-bin-dir - hostPath: - path: /opt/cni/bin - - name: cni-net-dir - hostPath: - path: /etc/cni/net.d - -# Create all the CustomResourceDefinitions needed for -# Calico policy and networking mode. 
---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: felixconfigurations.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: FelixConfiguration - plural: felixconfigurations - singular: felixconfiguration - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: bgppeers.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: BGPPeer - plural: bgppeers - singular: bgppeer - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: bgpconfigurations.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: BGPConfiguration - plural: bgpconfigurations - singular: bgpconfiguration - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: ippools.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: IPPool - plural: ippools - singular: ippool - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: hostendpoints.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: HostEndpoint - plural: hostendpoints - singular: hostendpoint - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: clusterinformations.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: ClusterInformation - plural: clusterinformations - singular: clusterinformation - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: globalnetworkpolicies.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: GlobalNetworkPolicy - plural: globalnetworkpolicies - singular: globalnetworkpolicy - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: globalnetworksets.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: GlobalNetworkSet - plural: globalnetworksets - singular: globalnetworkset - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: networkpolicies.crd.projectcalico.org -spec: - scope: Namespaced - group: crd.projectcalico.org - version: v1 - names: - kind: NetworkPolicy - plural: networkpolicies - singular: networkpolicy - ---- - -apiVersion: v1 -kind: ServiceAccount -metadata: - name: calico-node - namespace: kube-system diff --git a/calico/calico.yaml.tpl b/calico/calico.yaml.tpl deleted file mode 100644 index 68ac619..0000000 --- a/calico/calico.yaml.tpl +++ /dev/null @@ -1,470 +0,0 @@ -# Calico Version v3.1.3 -# https://docs.projectcalico.org/v3.1/releases#v3.1.3 -# This manifest includes the following component versions: -# calico/node:v3.1.3 -# calico/cni:v3.1.3 - -# This ConfigMap is used to configure a self-hosted Calico installation. -kind: ConfigMap -apiVersion: v1 -metadata: - name: calico-config - namespace: kube-system -data: - # To enable Typha, set this to "calico-typha" *and* set a non-zero value for Typha replicas - # below. We recommend using Typha if you have more than 50 nodes. Above 100 nodes it is - # essential. 
- typha_service_name: "none" - - # The CNI network configuration to install on each node. - cni_network_config: |- - { - "name": "k8s-pod-network", - "cniVersion": "0.3.0", - "plugins": [ - { - "type": "calico", - "log_level": "info", - "datastore_type": "kubernetes", - "nodename": "__KUBERNETES_NODE_NAME__", - "mtu": 1500, - "ipam": { - "type": "host-local", - "subnet": "usePodCidr" - }, - "policy": { - "type": "k8s" - }, - "kubernetes": { - "kubeconfig": "__KUBECONFIG_FILEPATH__" - } - }, - { - "type": "portmap", - "snat": true, - "capabilities": {"portMappings": true} - } - ] - } - ---- - -# This manifest creates a Service, which will be backed by Calico's Typha daemon. -# Typha sits in between Felix and the API server, reducing Calico's load on the API server. - -apiVersion: v1 -kind: Service -metadata: - name: calico-typha - namespace: kube-system - labels: - k8s-app: calico-typha -spec: - ports: - - port: 5473 - protocol: TCP - targetPort: calico-typha - name: calico-typha - selector: - k8s-app: calico-typha - ---- - -# This manifest creates a Deployment of Typha to back the above service. - -apiVersion: apps/v1beta1 -kind: Deployment -metadata: - name: calico-typha - namespace: kube-system - labels: - k8s-app: calico-typha -spec: - # Number of Typha replicas. To enable Typha, set this to a non-zero value *and* set the - # typha_service_name variable in the calico-config ConfigMap above. - # - # We recommend using Typha if you have more than 50 nodes. Above 100 nodes it is essential - # (when using the Kubernetes datastore). Use one replica for every 100-200 nodes. In - # production, we recommend running at least 3 replicas to reduce the impact of rolling upgrade. - replicas: 0 - revisionHistoryLimit: 2 - template: - metadata: - labels: - k8s-app: calico-typha - annotations: - # This, along with the CriticalAddonsOnly toleration below, marks the pod as a critical - # add-on, ensuring it gets priority scheduling and that its resources are reserved - # if it ever gets evicted. - scheduler.alpha.kubernetes.io/critical-pod: '' - spec: - hostNetwork: true - tolerations: - # Mark the pod as a critical add-on for rescheduling. - - key: CriticalAddonsOnly - operator: Exists - # Since Calico can't network a pod until Typha is up, we need to run Typha itself - # as a host-networked pod. - serviceAccountName: calico-node - containers: - - image: quay.io/calico/typha:v0.7.4 - name: calico-typha - ports: - - containerPort: 5473 - name: calico-typha - protocol: TCP - env: - # Enable "info" logging by default. Can be set to "debug" to increase verbosity. - - name: TYPHA_LOGSEVERITYSCREEN - value: "info" - # Disable logging to file and syslog since those don't make sense in Kubernetes. - - name: TYPHA_LOGFILEPATH - value: "none" - - name: TYPHA_LOGSEVERITYSYS - value: "none" - # Monitor the Kubernetes API to find the number of running instances and rebalance - # connections. - - name: TYPHA_CONNECTIONREBALANCINGMODE - value: "kubernetes" - - name: TYPHA_DATASTORETYPE - value: "kubernetes" - - name: TYPHA_HEALTHENABLED - value: "true" - # Uncomment these lines to enable prometheus metrics. Since Typha is host-networked, - # this opens a port on the host, which may need to be secured. 
- #- name: TYPHA_PROMETHEUSMETRICSENABLED - # value: "true" - #- name: TYPHA_PROMETHEUSMETRICSPORT - # value: "9093" - livenessProbe: - httpGet: - path: /liveness - port: 9098 - periodSeconds: 30 - initialDelaySeconds: 30 - readinessProbe: - httpGet: - path: /readiness - port: 9098 - periodSeconds: 10 - ---- - -# This manifest installs the calico/node container, as well -# as the Calico CNI plugins and network config on -# each master and worker node in a Kubernetes cluster. -kind: DaemonSet -apiVersion: extensions/v1beta1 -metadata: - name: calico-node - namespace: kube-system - labels: - k8s-app: calico-node -spec: - selector: - matchLabels: - k8s-app: calico-node - updateStrategy: - type: RollingUpdate - rollingUpdate: - maxUnavailable: 1 - template: - metadata: - labels: - k8s-app: calico-node - annotations: - # This, along with the CriticalAddonsOnly toleration below, - # marks the pod as a critical add-on, ensuring it gets - # priority scheduling and that its resources are reserved - # if it ever gets evicted. - scheduler.alpha.kubernetes.io/critical-pod: '' - spec: - hostNetwork: true - tolerations: - # Make sure calico/node gets scheduled on all nodes. - - effect: NoSchedule - operator: Exists - # Mark the pod as a critical add-on for rescheduling. - - key: CriticalAddonsOnly - operator: Exists - - effect: NoExecute - operator: Exists - serviceAccountName: calico-node - # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force - # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods. - terminationGracePeriodSeconds: 0 - containers: - # Runs calico/node container on each Kubernetes node. This - # container programs network policy and routes on each - # host. - - name: calico-node - image: quay.io/calico/node:v3.1.3 - env: - # Use Kubernetes API as the backing datastore. - - name: DATASTORE_TYPE - value: "kubernetes" - # Enable felix info logging. - - name: FELIX_LOGSEVERITYSCREEN - value: "info" - # Cluster type to identify the deployment type - - name: CLUSTER_TYPE - value: "k8s,bgp" - # Disable file logging so `kubectl logs` works. - - name: CALICO_DISABLE_FILE_LOGGING - value: "true" - # Set Felix endpoint to host default action to ACCEPT. - - name: FELIX_DEFAULTENDPOINTTOHOSTACTION - value: "ACCEPT" - # Disable IPV6 on Kubernetes. - - name: FELIX_IPV6SUPPORT - value: "false" - # Set MTU for tunnel device used if ipip is enabled - - name: FELIX_IPINIPMTU - value: "1440" - # Wait for the datastore. - - name: WAIT_FOR_DATASTORE - value: "true" - # The default IPv4 pool to create on startup if none exists. Pod IPs will be - # chosen from this range. Changing this value after installation will have - # no effect. This should fall within `--cluster-cidr`. - - name: CALICO_IPV4POOL_CIDR - value: "K8SHA_CIDR/16" - # Enable IPIP - - name: CALICO_IPV4POOL_IPIP - value: "Always" - # Enable IP-in-IP within Felix. - - name: FELIX_IPINIPENABLED - value: "true" - # Typha support: controlled by the ConfigMap. - - name: FELIX_TYPHAK8SSERVICENAME - valueFrom: - configMapKeyRef: - name: calico-config - key: typha_service_name - # Set based on the k8s node name. - - name: NODENAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - # Auto-detect the BGP IP address. 
- - name: IP - value: "autodetect" - - name: IP_AUTODETECTION_METHOD - value: "can-reach=K8SHA_CALICO_REACHABLE_IP" - - name: FELIX_HEALTHENABLED - value: "true" - securityContext: - privileged: true - resources: - requests: - cpu: 250m - livenessProbe: - httpGet: - path: /liveness - port: 9099 - periodSeconds: 10 - initialDelaySeconds: 10 - failureThreshold: 6 - readinessProbe: - httpGet: - path: /readiness - port: 9099 - periodSeconds: 10 - volumeMounts: - - mountPath: /lib/modules - name: lib-modules - readOnly: true - - mountPath: /var/run/calico - name: var-run-calico - readOnly: false - - mountPath: /var/lib/calico - name: var-lib-calico - readOnly: false - # This container installs the Calico CNI binaries - # and CNI network config file on each node. - - name: install-cni - image: quay.io/calico/cni:v3.1.3 - command: ["/install-cni.sh"] - env: - # Name of the CNI config file to create. - - name: CNI_CONF_NAME - value: "10-calico.conflist" - # The CNI network config to install on each node. - - name: CNI_NETWORK_CONFIG - valueFrom: - configMapKeyRef: - name: calico-config - key: cni_network_config - # Set the hostname based on the k8s node name. - - name: KUBERNETES_NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - volumeMounts: - - mountPath: /host/opt/cni/bin - name: cni-bin-dir - - mountPath: /host/etc/cni/net.d - name: cni-net-dir - volumes: - # Used by calico/node. - - name: lib-modules - hostPath: - path: /lib/modules - - name: var-run-calico - hostPath: - path: /var/run/calico - - name: var-lib-calico - hostPath: - path: /var/lib/calico - # Used to install CNI. - - name: cni-bin-dir - hostPath: - path: /opt/cni/bin - - name: cni-net-dir - hostPath: - path: /etc/cni/net.d - -# Create all the CustomResourceDefinitions needed for -# Calico policy and networking mode. 
---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: felixconfigurations.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: FelixConfiguration - plural: felixconfigurations - singular: felixconfiguration - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: bgppeers.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: BGPPeer - plural: bgppeers - singular: bgppeer - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: bgpconfigurations.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: BGPConfiguration - plural: bgpconfigurations - singular: bgpconfiguration - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: ippools.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: IPPool - plural: ippools - singular: ippool - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: hostendpoints.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: HostEndpoint - plural: hostendpoints - singular: hostendpoint - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: clusterinformations.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: ClusterInformation - plural: clusterinformations - singular: clusterinformation - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: globalnetworkpolicies.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: GlobalNetworkPolicy - plural: globalnetworkpolicies - singular: globalnetworkpolicy - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: globalnetworksets.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: GlobalNetworkSet - plural: globalnetworksets - singular: globalnetworkset - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: networkpolicies.crd.projectcalico.org -spec: - scope: Namespaced - group: crd.projectcalico.org - version: v1 - names: - kind: NetworkPolicy - plural: networkpolicies - singular: networkpolicy - ---- - -apiVersion: v1 -kind: ServiceAccount -metadata: - name: calico-node - namespace: kube-system diff --git a/calico/rbac-kdd.yaml b/calico/rbac-kdd.yaml deleted file mode 100644 index 60d3508..0000000 --- a/calico/rbac-kdd.yaml +++ /dev/null @@ -1,92 +0,0 @@ -# Calico Version v3.1.3 -# https://docs.projectcalico.org/v3.1/releases#v3.1.3 -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1beta1 -metadata: - name: calico-node -rules: - - apiGroups: [""] - resources: - - namespaces - verbs: - - get - - list - - watch - - apiGroups: [""] - resources: - - pods/status - verbs: - - update - - apiGroups: [""] - resources: - - pods - verbs: - - get - - list - - watch - - patch - - apiGroups: [""] - resources: - - services - verbs: - - get - - apiGroups: [""] - resources: - - endpoints - verbs: - - get - - apiGroups: [""] - resources: - - nodes - verbs: - - get - - list - - update - - 
watch - - apiGroups: ["extensions"] - resources: - - networkpolicies - verbs: - - get - - list - - watch - - apiGroups: ["networking.k8s.io"] - resources: - - networkpolicies - verbs: - - watch - - list - - apiGroups: ["crd.projectcalico.org"] - resources: - - globalfelixconfigs - - felixconfigurations - - bgppeers - - globalbgpconfigs - - bgpconfigurations - - ippools - - globalnetworkpolicies - - globalnetworksets - - networkpolicies - - clusterinformations - - hostendpoints - verbs: - - create - - get - - list - - update - - watch - ---- - -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: calico-node -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: calico-node -subjects: -- kind: ServiceAccount - name: calico-node - namespace: kube-system diff --git a/calico/upgrade/calico.yaml b/calico/upgrade/calico.yaml deleted file mode 100644 index 96f8415..0000000 --- a/calico/upgrade/calico.yaml +++ /dev/null @@ -1,521 +0,0 @@ -# Calico Version v3.3.2 -# https://docs.projectcalico.org/v3.3/releases#v3.3.2 -# This manifest includes the following component versions: -# calico/node:v3.3.2 -# calico/cni:v3.3.2 - -# This ConfigMap is used to configure a self-hosted Calico installation. -kind: ConfigMap -apiVersion: v1 -metadata: - name: calico-config - namespace: kube-system -data: - # To enable Typha, set this to "calico-typha" *and* set a non-zero value for Typha replicas - # below. We recommend using Typha if you have more than 50 nodes. Above 100 nodes it is - # essential. - typha_service_name: "none" - # Configure the Calico backend to use. - calico_backend: "bird" - - # Configure the MTU to use - veth_mtu: "1440" - - # The CNI network configuration to install on each node. The special - # values in this config will be automatically populated. - cni_network_config: |- - { - "name": "k8s-pod-network", - "cniVersion": "0.3.0", - "plugins": [ - { - "type": "calico", - "log_level": "info", - "datastore_type": "kubernetes", - "nodename": "__KUBERNETES_NODE_NAME__", - "mtu": __CNI_MTU__, - "ipam": { - "type": "host-local", - "subnet": "usePodCidr" - }, - "policy": { - "type": "k8s" - }, - "kubernetes": { - "kubeconfig": "__KUBECONFIG_FILEPATH__" - } - }, - { - "type": "portmap", - "snat": true, - "capabilities": {"portMappings": true} - } - ] - } - ---- - - -# This manifest creates a Service, which will be backed by Calico's Typha daemon. -# Typha sits in between Felix and the API server, reducing Calico's load on the API server. - -apiVersion: v1 -kind: Service -metadata: - name: calico-typha - namespace: kube-system - labels: - k8s-app: calico-typha -spec: - ports: - - port: 5473 - protocol: TCP - targetPort: calico-typha - name: calico-typha - selector: - k8s-app: calico-typha - ---- - -# This manifest creates a Deployment of Typha to back the above service. - -apiVersion: apps/v1beta1 -kind: Deployment -metadata: - name: calico-typha - namespace: kube-system - labels: - k8s-app: calico-typha -spec: - # Number of Typha replicas. To enable Typha, set this to a non-zero value *and* set the - # typha_service_name variable in the calico-config ConfigMap above. - # - # We recommend using Typha if you have more than 50 nodes. Above 100 nodes it is essential - # (when using the Kubernetes datastore). Use one replica for every 100-200 nodes. In - # production, we recommend running at least 3 replicas to reduce the impact of rolling upgrade. 
- replicas: 0 - revisionHistoryLimit: 2 - template: - metadata: - labels: - k8s-app: calico-typha - annotations: - # This, along with the CriticalAddonsOnly toleration below, marks the pod as a critical - # add-on, ensuring it gets priority scheduling and that its resources are reserved - # if it ever gets evicted. - scheduler.alpha.kubernetes.io/critical-pod: '' - cluster-autoscaler.kubernetes.io/safe-to-evict: 'true' - spec: - hostNetwork: true - tolerations: - # Mark the pod as a critical add-on for rescheduling. - - key: CriticalAddonsOnly - operator: Exists - # Since Calico can't network a pod until Typha is up, we need to run Typha itself - # as a host-networked pod. - serviceAccountName: calico-node - containers: - - image: quay.io/calico/typha:v3.3.2 - name: calico-typha - ports: - - containerPort: 5473 - name: calico-typha - protocol: TCP - env: - # Enable "info" logging by default. Can be set to "debug" to increase verbosity. - - name: TYPHA_LOGSEVERITYSCREEN - value: "info" - # Disable logging to file and syslog since those don't make sense in Kubernetes. - - name: TYPHA_LOGFILEPATH - value: "none" - - name: TYPHA_LOGSEVERITYSYS - value: "none" - # Monitor the Kubernetes API to find the number of running instances and rebalance - # connections. - - name: TYPHA_CONNECTIONREBALANCINGMODE - value: "kubernetes" - - name: TYPHA_DATASTORETYPE - value: "kubernetes" - - name: TYPHA_HEALTHENABLED - value: "true" - # Uncomment these lines to enable prometheus metrics. Since Typha is host-networked, - # this opens a port on the host, which may need to be secured. - #- name: TYPHA_PROMETHEUSMETRICSENABLED - # value: "true" - #- name: TYPHA_PROMETHEUSMETRICSPORT - # value: "9093" - livenessProbe: - exec: - command: - - calico-typha - - check - - liveness - periodSeconds: 30 - initialDelaySeconds: 30 - readinessProbe: - exec: - command: - - calico-typha - - check - - readiness - periodSeconds: 10 - ---- - -# This manifest creates a Pod Disruption Budget for Typha to allow K8s Cluster Autoscaler to evict - -apiVersion: policy/v1beta1 -kind: PodDisruptionBudget -metadata: - name: calico-typha - namespace: kube-system - labels: - k8s-app: calico-typha -spec: - maxUnavailable: 1 - selector: - matchLabels: - k8s-app: calico-typha - ---- - -# This manifest installs the calico/node container, as well -# as the Calico CNI plugins and network config on -# each master and worker node in a Kubernetes cluster. -kind: DaemonSet -apiVersion: extensions/v1beta1 -metadata: - name: calico-node - namespace: kube-system - labels: - k8s-app: calico-node -spec: - selector: - matchLabels: - k8s-app: calico-node - updateStrategy: - type: RollingUpdate - rollingUpdate: - maxUnavailable: 1 - template: - metadata: - labels: - k8s-app: calico-node - annotations: - # This, along with the CriticalAddonsOnly toleration below, - # marks the pod as a critical add-on, ensuring it gets - # priority scheduling and that its resources are reserved - # if it ever gets evicted. - scheduler.alpha.kubernetes.io/critical-pod: '' - spec: - hostNetwork: true - tolerations: - # Make sure calico-node gets scheduled on all nodes. - - effect: NoSchedule - operator: Exists - # Mark the pod as a critical add-on for rescheduling. - - key: CriticalAddonsOnly - operator: Exists - - effect: NoExecute - operator: Exists - serviceAccountName: calico-node - # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force - # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods. 
- terminationGracePeriodSeconds: 0 - containers: - # Runs calico/node container on each Kubernetes node. This - # container programs network policy and routes on each - # host. - - name: calico-node - image: quay.io/calico/node:v3.3.2 - env: - # Use Kubernetes API as the backing datastore. - - name: DATASTORE_TYPE - value: "kubernetes" - # Typha support: controlled by the ConfigMap. - - name: FELIX_TYPHAK8SSERVICENAME - valueFrom: - configMapKeyRef: - name: calico-config - key: typha_service_name - # Wait for the datastore. - - name: WAIT_FOR_DATASTORE - value: "true" - # Set based on the k8s node name. - - name: NODENAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - # Choose the backend to use. - - name: CALICO_NETWORKING_BACKEND - valueFrom: - configMapKeyRef: - name: calico-config - key: calico_backend - # Cluster type to identify the deployment type - - name: CLUSTER_TYPE - value: "k8s,bgp" - # Auto-detect the BGP IP address. - - name: IP - value: "autodetect" - # Enable IPIP - - name: CALICO_IPV4POOL_IPIP - value: "Always" - # Set MTU for tunnel device used if ipip is enabled - - name: FELIX_IPINIPMTU - valueFrom: - configMapKeyRef: - name: calico-config - key: veth_mtu - # The default IPv4 pool to create on startup if none exists. Pod IPs will be - # chosen from this range. Changing this value after installation will have - # no effect. This should fall within `--cluster-cidr`. - - name: CALICO_IPV4POOL_CIDR - value: "172.168.0.0/16" - - name: IP_AUTODETECTION_METHOD - value: can-reach=DESTINATION - # Disable file logging so `kubectl logs` works. - - name: CALICO_DISABLE_FILE_LOGGING - value: "true" - # Set Felix endpoint to host default action to ACCEPT. - - name: FELIX_DEFAULTENDPOINTTOHOSTACTION - value: "ACCEPT" - # Disable IPv6 on Kubernetes. - - name: FELIX_IPV6SUPPORT - value: "false" - # Set Felix logging to "info" - - name: FELIX_LOGSEVERITYSCREEN - value: "info" - - name: FELIX_HEALTHENABLED - value: "true" - securityContext: - privileged: true - resources: - requests: - cpu: 250m - livenessProbe: - httpGet: - path: /liveness - port: 9099 - host: localhost - periodSeconds: 10 - initialDelaySeconds: 10 - failureThreshold: 6 - readinessProbe: - exec: - command: - - /bin/calico-node - - -bird-ready - - -felix-ready - periodSeconds: 10 - volumeMounts: - - mountPath: /lib/modules - name: lib-modules - readOnly: true - - mountPath: /run/xtables.lock - name: xtables-lock - readOnly: false - - mountPath: /var/run/calico - name: var-run-calico - readOnly: false - - mountPath: /var/lib/calico - name: var-lib-calico - readOnly: false - # This container installs the Calico CNI binaries - # and CNI network config file on each node. - - name: install-cni - image: quay.io/calico/cni:v3.3.2 - command: ["/install-cni.sh"] - env: - # Name of the CNI config file to create. - - name: CNI_CONF_NAME - value: "10-calico.conflist" - # Set the hostname based on the k8s node name. - - name: KUBERNETES_NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - # The CNI network config to install on each node. - - name: CNI_NETWORK_CONFIG - valueFrom: - configMapKeyRef: - name: calico-config - key: cni_network_config - # CNI MTU Config variable - - name: CNI_MTU - valueFrom: - configMapKeyRef: - name: calico-config - key: veth_mtu - volumeMounts: - - mountPath: /host/opt/cni/bin - name: cni-bin-dir - - mountPath: /host/etc/cni/net.d - name: cni-net-dir - volumes: - # Used by calico/node. 
- - name: lib-modules - hostPath: - path: /lib/modules - - name: var-run-calico - hostPath: - path: /var/run/calico - - name: var-lib-calico - hostPath: - path: /var/lib/calico - - name: xtables-lock - hostPath: - path: /run/xtables.lock - type: FileOrCreate - # Used to install CNI. - - name: cni-bin-dir - hostPath: - path: /opt/cni/bin - - name: cni-net-dir - hostPath: - path: /etc/cni/net.d ---- - -apiVersion: v1 -kind: ServiceAccount -metadata: - name: calico-node - namespace: kube-system - ---- - -# Create all the CustomResourceDefinitions needed for -# Calico policy and networking mode. - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: felixconfigurations.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: FelixConfiguration - plural: felixconfigurations - singular: felixconfiguration ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: bgppeers.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: BGPPeer - plural: bgppeers - singular: bgppeer - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: bgpconfigurations.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: BGPConfiguration - plural: bgpconfigurations - singular: bgpconfiguration - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: ippools.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: IPPool - plural: ippools - singular: ippool - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: hostendpoints.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: HostEndpoint - plural: hostendpoints - singular: hostendpoint - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: clusterinformations.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: ClusterInformation - plural: clusterinformations - singular: clusterinformation - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: globalnetworkpolicies.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: GlobalNetworkPolicy - plural: globalnetworkpolicies - singular: globalnetworkpolicy - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: globalnetworksets.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: GlobalNetworkSet - plural: globalnetworksets - singular: globalnetworkset - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: networkpolicies.crd.projectcalico.org -spec: - scope: Namespaced - group: crd.projectcalico.org - version: v1 - names: - kind: NetworkPolicy - plural: networkpolicies - singular: networkpolicy - diff --git a/calico/upgrade/rbac-kdd.yaml b/calico/upgrade/rbac-kdd.yaml deleted file mode 100644 index 11fa50d..0000000 --- a/calico/upgrade/rbac-kdd.yaml +++ /dev/null @@ -1,92 +0,0 @@ -# Calico Version v3.3.2 -# https://docs.projectcalico.org/v3.3/releases#v3.3.2 -kind: ClusterRole -apiVersion: 
rbac.authorization.k8s.io/v1beta1 -metadata: - name: calico-node -rules: - - apiGroups: [""] - resources: - - namespaces - - serviceaccounts - verbs: - - get - - list - - watch - - apiGroups: [""] - resources: - - pods/status - verbs: - - patch - - apiGroups: [""] - resources: - - pods - verbs: - - get - - list - - watch - - apiGroups: [""] - resources: - - services - verbs: - - get - - apiGroups: [""] - resources: - - endpoints - verbs: - - get - - apiGroups: [""] - resources: - - nodes - verbs: - - get - - list - - update - - watch - - apiGroups: ["extensions"] - resources: - - networkpolicies - verbs: - - get - - list - - watch - - apiGroups: ["networking.k8s.io"] - resources: - - networkpolicies - verbs: - - watch - - list - - apiGroups: ["crd.projectcalico.org"] - resources: - - globalfelixconfigs - - felixconfigurations - - bgppeers - - globalbgpconfigs - - bgpconfigurations - - ippools - - globalnetworkpolicies - - globalnetworksets - - networkpolicies - - clusterinformations - - hostendpoints - verbs: - - create - - get - - list - - update - - watch - ---- - -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: calico-node -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: calico-node -subjects: -- kind: ServiceAccount - name: calico-node - namespace: kube-system diff --git a/config/k8s-master01/keepalived/check_apiserver.sh b/config/k8s-master01/keepalived/check_apiserver.sh deleted file mode 100755 index 3ceb7a8..0000000 --- a/config/k8s-master01/keepalived/check_apiserver.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/bin/bash - -# if check error then repeat check for 12 times, else exit -err=0 -for k in $(seq 1 12) -do - check_code=$(ps -ef | grep kube-apiserver | grep -v color | grep -v grep | wc -l) - if [[ $check_code == "0" ]]; then - err=$(expr $err + 1) - sleep 5 - continue - else - err=0 - break - fi -done - -if [[ $err != "0" ]]; then - echo "systemctl stop keepalived" - /usr/bin/systemctl stop keepalived - exit 1 -else - exit 0 -fi diff --git a/config/k8s-master01/keepalived/keepalived.conf b/config/k8s-master01/keepalived/keepalived.conf deleted file mode 100644 index 4217809..0000000 --- a/config/k8s-master01/keepalived/keepalived.conf +++ /dev/null @@ -1,29 +0,0 @@ -! 
Configuration File for keepalived -global_defs { - router_id LVS_DEVEL -} -vrrp_script chk_apiserver { - script "/etc/keepalived/check_apiserver.sh" - interval 2 - weight -5 - fall 3 - rise 2 -} -vrrp_instance VI_1 { - state MASTER - interface ens160 - mcast_src_ip 192.168.20.20 - virtual_router_id 51 - priority 102 - advert_int 2 - authentication { - auth_type PASS - auth_pass 412f7dc3bfed32194d1600c483e10ad1d - } - virtual_ipaddress { - 192.168.20.10 - } - track_script { - chk_apiserver - } -} diff --git a/config/k8s-master01/kubeadm-config.yaml b/config/k8s-master01/kubeadm-config.yaml deleted file mode 100644 index e694151..0000000 --- a/config/k8s-master01/kubeadm-config.yaml +++ /dev/null @@ -1,29 +0,0 @@ -apiVersion: kubeadm.k8s.io/v1alpha2 -kind: MasterConfiguration -kubernetesVersion: v1.11.1 -apiServerCertSANs: -- k8s-master01 -- k8s-master02 -- k8s-master03 -- k8s-master-lb -- 192.168.20.20 -- 192.168.20.21 -- 192.168.20.22 -- 192.168.20.10 -etcd: - local: - extraArgs: - listen-client-urls: "https://127.0.0.1:2379,https://192.168.20.20:2379" - advertise-client-urls: "https://192.168.20.20:2379" - listen-peer-urls: "https://192.168.20.20:2380" - initial-advertise-peer-urls: "https://192.168.20.20:2380" - initial-cluster: "k8s-master01=https://192.168.20.20:2380" - serverCertSANs: - - k8s-master01 - - 192.168.20.20 - peerCertSANs: - - k8s-master01 - - 192.168.20.20 -networking: - # This CIDR is a Calico default. Substitute or remove for your CNI provider. - podSubnet: "172.168.0.0/16" diff --git a/config/k8s-master01/nginx-lb/docker-compose.yaml b/config/k8s-master01/nginx-lb/docker-compose.yaml deleted file mode 100644 index 72048d7..0000000 --- a/config/k8s-master01/nginx-lb/docker-compose.yaml +++ /dev/null @@ -1,11 +0,0 @@ -version: '2' -services: - etcd: - image: nginx:latest - container_name: nginx-lb - hostname: nginx-lb - volumes: - - ./nginx-lb.conf:/etc/nginx/nginx.conf - ports: - - 16443:16443 - restart: always diff --git a/config/k8s-master01/nginx-lb/nginx-lb.conf b/config/k8s-master01/nginx-lb/nginx-lb.conf deleted file mode 100644 index e087c2d..0000000 --- a/config/k8s-master01/nginx-lb/nginx-lb.conf +++ /dev/null @@ -1,46 +0,0 @@ -user nginx; -worker_processes 1; - -error_log /var/log/nginx/error.log warn; -pid /var/run/nginx.pid; - - -events { - worker_connections 1024; -} - - -http { - include /etc/nginx/mime.types; - default_type application/octet-stream; - - log_format main '$remote_addr - $remote_user [$time_local] "$request" ' - '$status $body_bytes_sent "$http_referer" ' - '"$http_user_agent" "$http_x_forwarded_for"'; - - access_log /var/log/nginx/access.log main; - - sendfile on; - #tcp_nopush on; - - keepalive_timeout 65; - - #gzip on; - - include /etc/nginx/conf.d/*.conf; -} - -stream { - upstream apiserver { - server 192.168.20.20:6443 weight=5 max_fails=3 fail_timeout=30s; - server 192.168.20.21:6443 weight=5 max_fails=3 fail_timeout=30s; - server 192.168.20.22:6443 weight=5 max_fails=3 fail_timeout=30s; - } - - server { - listen 16443; - proxy_connect_timeout 1s; - proxy_timeout 3s; - proxy_pass apiserver; - } -} diff --git a/config/k8s-master02/keepalived/check_apiserver.sh b/config/k8s-master02/keepalived/check_apiserver.sh deleted file mode 100755 index 3ceb7a8..0000000 --- a/config/k8s-master02/keepalived/check_apiserver.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/bin/bash - -# if check error then repeat check for 12 times, else exit -err=0 -for k in $(seq 1 12) -do - check_code=$(ps -ef | grep kube-apiserver | grep -v color | grep -v grep | wc -l) - if 
[[ $check_code == "0" ]]; then - err=$(expr $err + 1) - sleep 5 - continue - else - err=0 - break - fi -done - -if [[ $err != "0" ]]; then - echo "systemctl stop keepalived" - /usr/bin/systemctl stop keepalived - exit 1 -else - exit 0 -fi diff --git a/config/k8s-master02/keepalived/keepalived.conf b/config/k8s-master02/keepalived/keepalived.conf deleted file mode 100644 index e174297..0000000 --- a/config/k8s-master02/keepalived/keepalived.conf +++ /dev/null @@ -1,29 +0,0 @@ -! Configuration File for keepalived -global_defs { - router_id LVS_DEVEL -} -vrrp_script chk_apiserver { - script "/etc/keepalived/check_apiserver.sh" - interval 2 - weight -5 - fall 3 - rise 2 -} -vrrp_instance VI_1 { - state BACKUP - interface ens160 - mcast_src_ip 192.168.20.21 - virtual_router_id 51 - priority 101 - advert_int 2 - authentication { - auth_type PASS - auth_pass 412f7dc3bfed32194d1600c483e10ad1d - } - virtual_ipaddress { - 192.168.20.10 - } - track_script { - chk_apiserver - } -} diff --git a/config/k8s-master02/kubeadm-config.yaml b/config/k8s-master02/kubeadm-config.yaml deleted file mode 100644 index 623e677..0000000 --- a/config/k8s-master02/kubeadm-config.yaml +++ /dev/null @@ -1,30 +0,0 @@ -apiVersion: kubeadm.k8s.io/v1alpha2 -kind: MasterConfiguration -kubernetesVersion: v1.11.1 -apiServerCertSANs: -- k8s-master01 -- k8s-master02 -- k8s-master03 -- k8s-master-lb -- 192.168.20.20 -- 192.168.20.21 -- 192.168.20.22 -- 192.168.20.10 -etcd: - local: - extraArgs: - listen-client-urls: "https://127.0.0.1:2379,https://192.168.20.21:2379" - advertise-client-urls: "https://192.168.20.21:2379" - listen-peer-urls: "https://192.168.20.21:2380" - initial-advertise-peer-urls: "https://192.168.20.21:2380" - initial-cluster: "k8s-master01=https://192.168.20.20:2380,k8s-master02=https://192.168.20.21:2380" - initial-cluster-state: existing - serverCertSANs: - - k8s-master02 - - 192.168.20.21 - peerCertSANs: - - k8s-master02 - - 192.168.20.21 -networking: - # This CIDR is a calico default. Substitute or remove for your CNI provider. 
- podSubnet: "172.168.0.0/16" diff --git a/config/k8s-master02/nginx-lb/docker-compose.yaml b/config/k8s-master02/nginx-lb/docker-compose.yaml deleted file mode 100644 index 72048d7..0000000 --- a/config/k8s-master02/nginx-lb/docker-compose.yaml +++ /dev/null @@ -1,11 +0,0 @@ -version: '2' -services: - etcd: - image: nginx:latest - container_name: nginx-lb - hostname: nginx-lb - volumes: - - ./nginx-lb.conf:/etc/nginx/nginx.conf - ports: - - 16443:16443 - restart: always diff --git a/config/k8s-master02/nginx-lb/nginx-lb.conf b/config/k8s-master02/nginx-lb/nginx-lb.conf deleted file mode 100644 index e087c2d..0000000 --- a/config/k8s-master02/nginx-lb/nginx-lb.conf +++ /dev/null @@ -1,46 +0,0 @@ -user nginx; -worker_processes 1; - -error_log /var/log/nginx/error.log warn; -pid /var/run/nginx.pid; - - -events { - worker_connections 1024; -} - - -http { - include /etc/nginx/mime.types; - default_type application/octet-stream; - - log_format main '$remote_addr - $remote_user [$time_local] "$request" ' - '$status $body_bytes_sent "$http_referer" ' - '"$http_user_agent" "$http_x_forwarded_for"'; - - access_log /var/log/nginx/access.log main; - - sendfile on; - #tcp_nopush on; - - keepalive_timeout 65; - - #gzip on; - - include /etc/nginx/conf.d/*.conf; -} - -stream { - upstream apiserver { - server 192.168.20.20:6443 weight=5 max_fails=3 fail_timeout=30s; - server 192.168.20.21:6443 weight=5 max_fails=3 fail_timeout=30s; - server 192.168.20.22:6443 weight=5 max_fails=3 fail_timeout=30s; - } - - server { - listen 16443; - proxy_connect_timeout 1s; - proxy_timeout 3s; - proxy_pass apiserver; - } -} diff --git a/config/k8s-master03/keepalived/check_apiserver.sh b/config/k8s-master03/keepalived/check_apiserver.sh deleted file mode 100755 index 3ceb7a8..0000000 --- a/config/k8s-master03/keepalived/check_apiserver.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/bin/bash - -# if check error then repeat check for 12 times, else exit -err=0 -for k in $(seq 1 12) -do - check_code=$(ps -ef | grep kube-apiserver | grep -v color | grep -v grep | wc -l) - if [[ $check_code == "0" ]]; then - err=$(expr $err + 1) - sleep 5 - continue - else - err=0 - break - fi -done - -if [[ $err != "0" ]]; then - echo "systemctl stop keepalived" - /usr/bin/systemctl stop keepalived - exit 1 -else - exit 0 -fi diff --git a/config/k8s-master03/keepalived/keepalived.conf b/config/k8s-master03/keepalived/keepalived.conf deleted file mode 100644 index e61e3cf..0000000 --- a/config/k8s-master03/keepalived/keepalived.conf +++ /dev/null @@ -1,29 +0,0 @@ -! 
Configuration File for keepalived -global_defs { - router_id LVS_DEVEL -} -vrrp_script chk_apiserver { - script "/etc/keepalived/check_apiserver.sh" - interval 2 - weight -5 - fall 3 - rise 2 -} -vrrp_instance VI_1 { - state BACKUP - interface ens160 - mcast_src_ip 192.168.20.22 - virtual_router_id 51 - priority 100 - advert_int 2 - authentication { - auth_type PASS - auth_pass 412f7dc3bfed32194d1600c483e10ad1d - } - virtual_ipaddress { - 192.168.20.10 - } - track_script { - chk_apiserver - } -} diff --git a/config/k8s-master03/kubeadm-config.yaml b/config/k8s-master03/kubeadm-config.yaml deleted file mode 100644 index 8917c14..0000000 --- a/config/k8s-master03/kubeadm-config.yaml +++ /dev/null @@ -1,30 +0,0 @@ -apiVersion: kubeadm.k8s.io/v1alpha2 -kind: MasterConfiguration -kubernetesVersion: v1.11.1 -apiServerCertSANs: -- k8s-master01 -- k8s-master02 -- k8s-master03 -- k8s-master-lb -- 192.168.20.20 -- 192.168.20.21 -- 192.168.20.22 -- 192.168.20.10 -etcd: - local: - extraArgs: - listen-client-urls: "https://127.0.0.1:2379,https://192.168.20.22:2379" - advertise-client-urls: "https://192.168.20.22:2379" - listen-peer-urls: "https://192.168.20.22:2380" - initial-advertise-peer-urls: "https://192.168.20.22:2380" - initial-cluster: "k8s-master01=https://192.168.20.20:2380,k8s-master02=https://192.168.20.21:2380,k8s-master03=https://192.168.20.22:2380" - initial-cluster-state: existing - serverCertSANs: - - k8s-master03 - - 192.168.20.22 - peerCertSANs: - - k8s-master03 - - 192.168.20.22 -networking: - # This CIDR is a calico default. Substitute or remove for your CNI provider. - podSubnet: "172.168.0.0/16" diff --git a/config/k8s-master03/nginx-lb/docker-compose.yaml b/config/k8s-master03/nginx-lb/docker-compose.yaml deleted file mode 100644 index 72048d7..0000000 --- a/config/k8s-master03/nginx-lb/docker-compose.yaml +++ /dev/null @@ -1,11 +0,0 @@ -version: '2' -services: - etcd: - image: nginx:latest - container_name: nginx-lb - hostname: nginx-lb - volumes: - - ./nginx-lb.conf:/etc/nginx/nginx.conf - ports: - - 16443:16443 - restart: always diff --git a/config/k8s-master03/nginx-lb/nginx-lb.conf b/config/k8s-master03/nginx-lb/nginx-lb.conf deleted file mode 100644 index e087c2d..0000000 --- a/config/k8s-master03/nginx-lb/nginx-lb.conf +++ /dev/null @@ -1,46 +0,0 @@ -user nginx; -worker_processes 1; - -error_log /var/log/nginx/error.log warn; -pid /var/run/nginx.pid; - - -events { - worker_connections 1024; -} - - -http { - include /etc/nginx/mime.types; - default_type application/octet-stream; - - log_format main '$remote_addr - $remote_user [$time_local] "$request" ' - '$status $body_bytes_sent "$http_referer" ' - '"$http_user_agent" "$http_x_forwarded_for"'; - - access_log /var/log/nginx/access.log main; - - sendfile on; - #tcp_nopush on; - - keepalive_timeout 65; - - #gzip on; - - include /etc/nginx/conf.d/*.conf; -} - -stream { - upstream apiserver { - server 192.168.20.20:6443 weight=5 max_fails=3 fail_timeout=30s; - server 192.168.20.21:6443 weight=5 max_fails=3 fail_timeout=30s; - server 192.168.20.22:6443 weight=5 max_fails=3 fail_timeout=30s; - } - - server { - listen 16443; - proxy_connect_timeout 1s; - proxy_timeout 3s; - proxy_pass apiserver; - } -} diff --git a/create-config.sh b/create-config.sh deleted file mode 100755 index 276b47a..0000000 --- a/create-config.sh +++ /dev/null @@ -1,231 +0,0 @@ -#!/bin/bash - -####################################### -# set variables below to create the config files, all files will create at ./config directory 
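
Taken together, the three kubeadm-config.yaml files grow the etcd cluster one member at a time: k8s-master01 bootstraps a single-member cluster, while k8s-master02 and k8s-master03 join with `initial-cluster-state: existing`. Once all three masters are up, membership can be checked with etcdctl; the sketch below assumes kubeadm's default etcd certificate paths under /etc/kubernetes/pki/etcd/.

```sh
# Expect three started members (k8s-master01..03) in the output.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.20.20:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/peer.crt \
  --key=/etc/kubernetes/pki/etcd/peer.key \
  member list
```
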
-####################################### - -# master keepalived virtual ip address -export K8SHA_VIP=192.168.20.10 - -# master01 ip address -export K8SHA_IP1=192.168.20.20 - -# master02 ip address -export K8SHA_IP2=192.168.20.21 - -# master03 ip address -export K8SHA_IP3=192.168.20.22 - -# master keepalived virtual ip hostname -export K8SHA_VHOST=k8s-master-lb - -# master01 hostname -export K8SHA_HOST1=k8s-master01 - -# master02 hostname -export K8SHA_HOST2=k8s-master02 - -# master03 hostname -export K8SHA_HOST3=k8s-master03 - -# master01 network interface name -export K8SHA_NETINF1=ens160 - -# master02 network interface name -export K8SHA_NETINF2=ens160 - -# master03 network interface name -export K8SHA_NETINF3=ens160 - -# keepalived auth_pass config -export K8SHA_KEEPALIVED_AUTH=412f7dc3bfed32194d1600c483e10ad1d - -# calico reachable ip address -export K8SHA_CALICO_REACHABLE_IP=192.168.0.1 - -# kubernetes CIDR pod subnet, if CIDR pod subnet is "172.168.0.0/16" please set to "172.168.0.0" -export K8SHA_CIDR=172.168.0.0 - -############################## -# please do not modify anything below -############################## - -mkdir -p config/$K8SHA_HOST1/{keepalived,nginx-lb} -mkdir -p config/$K8SHA_HOST2/{keepalived,nginx-lb} -mkdir -p config/$K8SHA_HOST3/{keepalived,nginx-lb} - -# create all kubeadm-config.yaml files - -cat << EOF > config/$K8SHA_HOST1/kubeadm-config.yaml -apiVersion: kubeadm.k8s.io/v1alpha2 -kind: MasterConfiguration -kubernetesVersion: v1.11.1 -apiServerCertSANs: -- ${K8SHA_HOST1} -- ${K8SHA_HOST2} -- ${K8SHA_HOST3} -- ${K8SHA_VHOST} -- ${K8SHA_IP1} -- ${K8SHA_IP2} -- ${K8SHA_IP3} -- ${K8SHA_VIP} -etcd: - local: - extraArgs: - listen-client-urls: "https://127.0.0.1:2379,https://${K8SHA_IP1}:2379" - advertise-client-urls: "https://${K8SHA_IP1}:2379" - listen-peer-urls: "https://${K8SHA_IP1}:2380" - initial-advertise-peer-urls: "https://${K8SHA_IP1}:2380" - initial-cluster: "${K8SHA_HOST1}=https://${K8SHA_IP1}:2380" - serverCertSANs: - - ${K8SHA_HOST1} - - ${K8SHA_IP1} - peerCertSANs: - - ${K8SHA_HOST1} - - ${K8SHA_IP1} -networking: - # This CIDR is a Calico default. Substitute or remove for your CNI provider. - podSubnet: "${K8SHA_CIDR}/16" -EOF - -cat << EOF > config/$K8SHA_HOST2/kubeadm-config.yaml -apiVersion: kubeadm.k8s.io/v1alpha2 -kind: MasterConfiguration -kubernetesVersion: v1.11.1 -apiServerCertSANs: -- ${K8SHA_HOST1} -- ${K8SHA_HOST2} -- ${K8SHA_HOST3} -- ${K8SHA_VHOST} -- ${K8SHA_IP1} -- ${K8SHA_IP2} -- ${K8SHA_IP3} -- ${K8SHA_VIP} -etcd: - local: - extraArgs: - listen-client-urls: "https://127.0.0.1:2379,https://${K8SHA_IP2}:2379" - advertise-client-urls: "https://${K8SHA_IP2}:2379" - listen-peer-urls: "https://${K8SHA_IP2}:2380" - initial-advertise-peer-urls: "https://${K8SHA_IP2}:2380" - initial-cluster: "${K8SHA_HOST1}=https://${K8SHA_IP1}:2380,${K8SHA_HOST2}=https://${K8SHA_IP2}:2380" - initial-cluster-state: existing - serverCertSANs: - - ${K8SHA_HOST2} - - ${K8SHA_IP2} - peerCertSANs: - - ${K8SHA_HOST2} - - ${K8SHA_IP2} -networking: - # This CIDR is a calico default. Substitute or remove for your CNI provider. 
- podSubnet: "${K8SHA_CIDR}/16" -EOF - -cat << EOF > config/$K8SHA_HOST3/kubeadm-config.yaml -apiVersion: kubeadm.k8s.io/v1alpha2 -kind: MasterConfiguration -kubernetesVersion: v1.11.1 -apiServerCertSANs: -- ${K8SHA_HOST1} -- ${K8SHA_HOST2} -- ${K8SHA_HOST3} -- ${K8SHA_VHOST} -- ${K8SHA_IP1} -- ${K8SHA_IP2} -- ${K8SHA_IP3} -- ${K8SHA_VIP} -etcd: - local: - extraArgs: - listen-client-urls: "https://127.0.0.1:2379,https://${K8SHA_IP3}:2379" - advertise-client-urls: "https://${K8SHA_IP3}:2379" - listen-peer-urls: "https://${K8SHA_IP3}:2380" - initial-advertise-peer-urls: "https://${K8SHA_IP3}:2380" - initial-cluster: "${K8SHA_HOST1}=https://${K8SHA_IP1}:2380,${K8SHA_HOST2}=https://${K8SHA_IP2}:2380,${K8SHA_HOST3}=https://${K8SHA_IP3}:2380" - initial-cluster-state: existing - serverCertSANs: - - ${K8SHA_HOST3} - - ${K8SHA_IP3} - peerCertSANs: - - ${K8SHA_HOST3} - - ${K8SHA_IP3} -networking: - # This CIDR is a calico default. Substitute or remove for your CNI provider. - podSubnet: "${K8SHA_CIDR}/16" -EOF - -echo "create kubeadm-config.yaml files success. config/$K8SHA_HOST1/kubeadm-config.yaml" -echo "create kubeadm-config.yaml files success. config/$K8SHA_HOST2/kubeadm-config.yaml" -echo "create kubeadm-config.yaml files success. config/$K8SHA_HOST3/kubeadm-config.yaml" - -# create all keepalived files -cp keepalived/check_apiserver.sh config/$K8SHA_HOST1/keepalived -cp keepalived/check_apiserver.sh config/$K8SHA_HOST2/keepalived -cp keepalived/check_apiserver.sh config/$K8SHA_HOST3/keepalived - -sed \ --e "s/K8SHA_KA_STATE/MASTER/g" \ --e "s/K8SHA_KA_INTF/${K8SHA_NETINF1}/g" \ --e "s/K8SHA_IPLOCAL/${K8SHA_IP1}/g" \ --e "s/K8SHA_KA_PRIO/102/g" \ --e "s/K8SHA_VIP/${K8SHA_VIP}/g" \ --e "s/K8SHA_KA_AUTH/${K8SHA_KEEPALIVED_AUTH}/g" \ -keepalived/keepalived.conf.tpl > config/$K8SHA_HOST1/keepalived/keepalived.conf - -sed \ --e "s/K8SHA_KA_STATE/BACKUP/g" \ --e "s/K8SHA_KA_INTF/${K8SHA_NETINF2}/g" \ --e "s/K8SHA_IPLOCAL/${K8SHA_IP2}/g" \ --e "s/K8SHA_KA_PRIO/101/g" \ --e "s/K8SHA_VIP/${K8SHA_VIP}/g" \ --e "s/K8SHA_KA_AUTH/${K8SHA_KEEPALIVED_AUTH}/g" \ -keepalived/keepalived.conf.tpl > config/$K8SHA_HOST2/keepalived/keepalived.conf - -sed \ --e "s/K8SHA_KA_STATE/BACKUP/g" \ --e "s/K8SHA_KA_INTF/${K8SHA_NETINF3}/g" \ --e "s/K8SHA_IPLOCAL/${K8SHA_IP3}/g" \ --e "s/K8SHA_KA_PRIO/100/g" \ --e "s/K8SHA_VIP/${K8SHA_VIP}/g" \ --e "s/K8SHA_KA_AUTH/${K8SHA_KEEPALIVED_AUTH}/g" \ -keepalived/keepalived.conf.tpl > config/$K8SHA_HOST3/keepalived/keepalived.conf - -echo "create keepalived files success. config/$K8SHA_HOST1/keepalived/" -echo "create keepalived files success. config/$K8SHA_HOST2/keepalived/" -echo "create keepalived files success. config/$K8SHA_HOST3/keepalived/" - -# create all nginx-lb files - -cp nginx-lb/docker-compose.yaml config/$K8SHA_HOST1/nginx-lb/ -cp nginx-lb/docker-compose.yaml config/$K8SHA_HOST2/nginx-lb/ -cp nginx-lb/docker-compose.yaml config/$K8SHA_HOST3/nginx-lb/ - -sed \ --e "s/K8SHA_IP1/$K8SHA_IP1/g" \ --e "s/K8SHA_IP2/$K8SHA_IP2/g" \ --e "s/K8SHA_IP3/$K8SHA_IP3/g" \ -nginx-lb/nginx-lb.conf.tpl > config/$K8SHA_HOST1/nginx-lb/nginx-lb.conf - -sed \ --e "s/K8SHA_IP1/$K8SHA_IP1/g" \ --e "s/K8SHA_IP2/$K8SHA_IP2/g" \ --e "s/K8SHA_IP3/$K8SHA_IP3/g" \ -nginx-lb/nginx-lb.conf.tpl > config/$K8SHA_HOST2/nginx-lb/nginx-lb.conf - -sed \ --e "s/K8SHA_IP1/$K8SHA_IP1/g" \ --e "s/K8SHA_IP2/$K8SHA_IP2/g" \ --e "s/K8SHA_IP3/$K8SHA_IP3/g" \ -nginx-lb/nginx-lb.conf.tpl > config/$K8SHA_HOST3/nginx-lb/nginx-lb.conf - -echo "create nginx-lb files success. 
config/$K8SHA_HOST1/nginx-lb/" -echo "create nginx-lb files success. config/$K8SHA_HOST2/nginx-lb/" -echo "create nginx-lb files success. config/$K8SHA_HOST3/nginx-lb/" - -# create calico yaml file -sed \ --e "s/K8SHA_CALICO_REACHABLE_IP/${K8SHA_CALICO_REACHABLE_IP}/g" \ --e "s/K8SHA_CIDR/${K8SHA_CIDR}/g" \ -calico/calico.yaml.tpl > calico/calico.yaml - -echo "create calico.yaml file success. calico/calico.yaml" diff --git a/dashboard/kubernetes-dashboard.yaml b/dashboard/kubernetes-dashboard.yaml deleted file mode 100644 index 9094412..0000000 --- a/dashboard/kubernetes-dashboard.yaml +++ /dev/null @@ -1,192 +0,0 @@ -# Copyright 2017 The Kubernetes Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# Configuration to deploy release version of the Dashboard UI compatible with -# Kubernetes 1.8. -# -# Example usage: kubectl create -f - -# ------------------- Dashboard Secret ------------------- # - -apiVersion: v1 -kind: Secret -metadata: - labels: - k8s-app: kubernetes-dashboard - name: kubernetes-dashboard-certs - namespace: kube-system -type: Opaque - ---- -# ------------------- Dashboard Service Account ------------------- # - -apiVersion: v1 -kind: ServiceAccount -metadata: - labels: - k8s-app: kubernetes-dashboard - name: kubernetes-dashboard - namespace: kube-system - ---- -# ------------------- Dashboard Role & Role Binding ------------------- # - -kind: Role -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: kubernetes-dashboard-minimal - namespace: kube-system -rules: - # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret. -- apiGroups: [""] - resources: ["secrets"] - verbs: ["create"] - # Allow Dashboard to create 'kubernetes-dashboard-settings' config map. -- apiGroups: [""] - resources: ["configmaps"] - verbs: ["create"] - # Allow Dashboard to get, update and delete Dashboard exclusive secrets. -- apiGroups: [""] - resources: ["secrets"] - resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"] - verbs: ["get", "update", "delete"] - # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map. -- apiGroups: [""] - resources: ["configmaps"] - resourceNames: ["kubernetes-dashboard-settings"] - verbs: ["get", "update"] - # Allow Dashboard to get metrics from heapster. 
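
create-config.sh above only renders files: it substitutes the exported K8SHA_* variables into the keepalived, nginx-lb and calico templates and writes per-master kubeadm-config.yaml files under ./config/. A typical invocation looks like the sketch below (illustrative only; adjust the variables at the top of the script for your own addresses and interface names first):

```sh
# Edit the K8SHA_* exports at the top of the script, then run it from
# the repository root:
./create-config.sh

# Inspect the generated per-master configs and the rendered calico manifest:
find config calico -maxdepth 3 -type f
```
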
-- apiGroups: [""] - resources: ["services"] - resourceNames: ["heapster"] - verbs: ["proxy"] -- apiGroups: [""] - resources: ["services/proxy"] - resourceNames: ["heapster", "http:heapster:", "https:heapster:"] - verbs: ["get"] - ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: - name: kubernetes-dashboard-minimal - namespace: kube-system -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: kubernetes-dashboard-minimal -subjects: -- kind: ServiceAccount - name: kubernetes-dashboard - namespace: kube-system - ---- -# ------------------- Dashboard Deployment ------------------- # - -kind: Deployment -apiVersion: apps/v1beta2 -metadata: - labels: - k8s-app: kubernetes-dashboard - name: kubernetes-dashboard - namespace: kube-system -spec: - replicas: 1 - revisionHistoryLimit: 10 - selector: - matchLabels: - k8s-app: kubernetes-dashboard - template: - metadata: - labels: - k8s-app: kubernetes-dashboard - spec: - nodeSelector: - node-role.kubernetes.io/master: "" - containers: - - name: kubernetes-dashboard - image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.3 - ports: - - containerPort: 8443 - protocol: TCP - args: - - --auto-generate-certificates - # Uncomment the following line to manually specify Kubernetes API server Host - # If not specified, Dashboard will attempt to auto discover the API server and connect - # to it. Uncomment only if the default does not work. - # - --apiserver-host=http://my-address:port - volumeMounts: - - name: kubernetes-dashboard-certs - mountPath: /certs - # Create on-disk volume to store exec logs - - mountPath: /tmp - name: tmp-volume - livenessProbe: - httpGet: - scheme: HTTPS - path: / - port: 8443 - initialDelaySeconds: 30 - timeoutSeconds: 30 - volumes: - - name: kubernetes-dashboard-certs - secret: - secretName: kubernetes-dashboard-certs - - name: tmp-volume - emptyDir: {} - serviceAccountName: kubernetes-dashboard - # Comment the following tolerations if Dashboard must not be deployed on master - tolerations: - - key: node-role.kubernetes.io/master - effect: NoSchedule - ---- -# ------------------- Dashboard Service ------------------- # - -kind: Service -apiVersion: v1 -metadata: - labels: - k8s-app: kubernetes-dashboard - name: kubernetes-dashboard - namespace: kube-system -spec: - type: NodePort - ports: - - port: 443 - targetPort: 8443 - nodePort: 30000 - selector: - k8s-app: kubernetes-dashboard - ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: admin-user - namespace: kube-system - ---- -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: admin-user -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: cluster-admin -subjects: -- kind: ServiceAccount - name: admin-user - namespace: kube-system diff --git a/heapster/grafana-dashboard/kubernetes-apps_rev1.json b/heapster/grafana-dashboard/kubernetes-apps_rev1.json deleted file mode 100644 index 8d622b7..0000000 --- a/heapster/grafana-dashboard/kubernetes-apps_rev1.json +++ /dev/null @@ -1,1400 +0,0 @@ -{ - "__inputs": [ - { - "name": "DS_PROMETHEUS", - "label": "prometheus", - "description": "", - "type": "datasource", - "pluginId": "prometheus", - "pluginName": "Prometheus" - } - ], - "__requires": [ - { - "type": "grafana", - "id": "grafana", - "name": "Grafana", - "version": "4.1.1" - }, - { - "type": "panel", - "id": "graph", - "name": "Graph", - "version": "" - }, - { - "type": "datasource", - "id": "prometheus", - "name": "Prometheus", - "version": "1.0.0" - } - ], 
- "annotations": { - "list": [] - }, - "editable": true, - "gnetId": 1471, - "graphTooltip": 1, - "hideControls": false, - "id": null, - "links": [], - "refresh": "30s", - "rows": [ - { - "collapse": false, - "height": "250px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "editable": true, - "error": false, - "fill": 1, - "grid": {}, - "id": 3, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 6, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(irate(http_requests_total{app=\"$container\", handler!=\"prometheus\", kubernetes_namespace=\"$namespace\"}[30s])) by (kubernetes_namespace,app,code)", - "interval": "", - "intervalFactor": 1, - "legendFormat": "native | {{code}}", - "refId": "A", - "step": 10 - }, - { - "expr": "sum(irate(nginx_http_requests_total{app=\"$container\", kubernetes_namespace=\"$namespace\"}[30s])) by (kubernetes_namespace,app,status)", - "interval": "", - "intervalFactor": 1, - "legendFormat": "nginx | {{status}}", - "refId": "B", - "step": 10 - }, - { - "expr": "sum(irate(haproxy_backend_http_responses_total{app=\"$container\", kubernetes_namespace=\"$namespace\"}[30s])) by (app,kubernetes_namespace,code)", - "interval": "", - "intervalFactor": 1, - "legendFormat": "haproxy | {{code}}", - "refId": "C", - "step": 10 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Request rate", - "tooltip": { - "msResolution": true, - "shared": false, - "sort": 0, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "ops", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - }, - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "fill": 1, - "id": 15, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 6, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(irate(haproxy_backend_http_responses_total{app=\"$container\", kubernetes_namespace=\"$namespace\",code=\"5xx\"}[30s])) by (app,kubernetes_namespace) / sum(irate(haproxy_backend_http_responses_total{app=\"$container\", kubernetes_namespace=\"$namespace\"}[30s])) by (app,kubernetes_namespace)", - "interval": "", - "intervalFactor": 2, - "legendFormat": "haproxy", - "refId": "A", - "step": 20 - }, - { - "expr": "sum(irate(http_requests_total{app=\"$container\", handler!=\"prometheus\", kubernetes_namespace=\"$namespace\", code=~\"5[0-9]+\"}[30s])) by (kubernetes_namespace,app) / sum(irate(http_requests_total{app=\"$container\", handler!=\"prometheus\", kubernetes_namespace=\"$namespace\"}[30s])) by 
(kubernetes_namespace,app)", - "intervalFactor": 2, - "legendFormat": "native", - "refId": "B", - "step": 20 - }, - { - "expr": "sum(irate(nginx_http_requests_total{app=\"$container\", kubernetes_namespace=\"$namespace\", status=~\"5[0-9]+\"}[30s])) by (kubernetes_namespace,app) / sum(irate(nginx_http_requests_total{app=\"$container\", kubernetes_namespace=\"$namespace\"}[30s])) by (kubernetes_namespace,app)", - "intervalFactor": 2, - "legendFormat": "nginx", - "refId": "C", - "step": 20 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Error rate", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "percentunit", - "label": null, - "logBase": 1, - "max": null, - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - } - ], - "repeat": null, - "repeatIteration": null, - "repeatRowId": null, - "showTitle": false, - "title": "Request rate", - "titleSize": "h6" - }, - { - "collapse": true, - "height": 224, - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "editable": true, - "error": false, - "fill": 1, - "grid": {}, - "id": 5, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "max", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "histogram_quantile(0.99, sum(rate(http_request_duration_seconds_bucket{app=\"$container\", kubernetes_namespace=\"$namespace\"}[30s])) by (app,kubernetes_namespace,le))", - "intervalFactor": 1, - "legendFormat": "native | 0.99", - "refId": "A", - "step": 1 - }, - { - "expr": "histogram_quantile(0.90, sum(rate(http_request_duration_seconds_bucket{app=\"$container\", kubernetes_namespace=\"$namespace\"}[30s])) by (app,kubernetes_namespace,le))", - "intervalFactor": 1, - "legendFormat": "native | 0.90", - "refId": "B", - "step": 1 - }, - { - "expr": "histogram_quantile(0.5, sum(rate(http_request_duration_seconds_bucket{app=\"$container\", kubernetes_namespace=\"$namespace\"}[30s])) by (app,kubernetes_namespace,le))", - "interval": "", - "intervalFactor": 1, - "legendFormat": "native | 0.50", - "refId": "C", - "step": 1 - }, - { - "expr": "histogram_quantile(0.99, sum(rate(nginx_http_request_duration_seconds_bucket{app=\"$container\", kubernetes_namespace=\"$namespace\"}[30s])) by (app,kubernetes_namespace,le))", - "intervalFactor": 1, - "legendFormat": "nginx | 0.99", - "refId": "D", - "step": 1 - }, - { - "expr": "histogram_quantile(0.9, sum(rate(nginx_http_request_duration_seconds_bucket{app=\"$container\", kubernetes_namespace=\"$namespace\"}[30s])) by (app,kubernetes_namespace,le))", - "intervalFactor": 1, - "legendFormat": "nginx | 0.90", - "refId": "E", - "step": 1 - }, - { - "expr": "histogram_quantile(0.5, sum(rate(nginx_http_request_duration_seconds_bucket{app=\"$container\", kubernetes_namespace=\"$namespace\"}[30s])) by (app,kubernetes_namespace,le))", - "intervalFactor": 1, - "legendFormat": "nginx | 0.50", - "refId": "F", - "step": 1 - } - ], - 
"thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Response time percentiles", - "tooltip": { - "msResolution": true, - "shared": true, - "sort": 0, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "s", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - } - ], - "repeat": null, - "repeatIteration": null, - "repeatRowId": null, - "showTitle": false, - "title": "Response time", - "titleSize": "h6" - }, - { - "collapse": false, - "height": 250, - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "editable": true, - "error": false, - "fill": 1, - "id": 7, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "count(count(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (pod_name))", - "interval": "", - "intervalFactor": 1, - "legendFormat": "pods", - "refId": "A", - "step": 5 - }, - { - "expr": "count(count(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (kubernetes_io_hostname))", - "interval": "", - "intervalFactor": 2, - "legendFormat": "hosts", - "refId": "B", - "step": 10 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Number of pods", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - } - ], - "repeat": null, - "repeatIteration": null, - "repeatRowId": null, - "showTitle": false, - "title": "Pod count", - "titleSize": "h6" - }, - { - "collapse": false, - "height": 250, - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "editable": true, - "error": false, - "fill": 1, - "id": 12, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [ - { - "alias": "elasticsearch-logging-data-20170207a (logging) - system", - "color": "#BF1B00" - }, - { - "alias": "elasticsearch-logging-data-20170207a (logging) - user", - "color": "#508642" - } - ], - "span": 12, - "stack": true, - "steppedLine": false, - "targets": [ - { - "expr": "sum(irate(container_cpu_system_seconds_total{container_name=\"$container\", namespace=\"$namespace\"}[30s])) by (namespace,container_name) / 
sum(container_spec_cpu_shares{container_name=\"$container\", namespace=\"$namespace\"} / 1024) by (namespace,container_name)", - "intervalFactor": 2, - "legendFormat": "system", - "refId": "C", - "step": 10 - }, - { - "expr": "sum(irate(container_cpu_user_seconds_total{container_name=\"$container\", namespace=\"$namespace\"}[30s])) by (namespace,container_name) / sum(container_spec_cpu_shares{container_name=\"$container\", namespace=\"$namespace\"} / 1024) by (namespace,container_name)", - "interval": "", - "intervalFactor": 2, - "legendFormat": "user", - "refId": "B", - "step": 10 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Cpu usage (relative to request)", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "percentunit", - "label": "", - "logBase": 1, - "max": "1", - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - } - ], - "repeat": null, - "repeatIteration": null, - "repeatRowId": null, - "showTitle": false, - "title": "Usage relative to request", - "titleSize": "h6" - }, - { - "collapse": true, - "height": 250, - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "editable": true, - "error": false, - "fill": 1, - "id": 10, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 6, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(irate(container_cpu_usage_seconds_total{container_name=\"$container\", namespace=\"$namespace\"}[30s])) by (namespace,container_name) / sum(container_spec_cpu_quota{container_name=\"$container\", namespace=\"$namespace\"} / container_spec_cpu_period{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name)", - "interval": "", - "intervalFactor": 1, - "legendFormat": "actual", - "metric": "", - "refId": "A", - "step": 1 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Cpu usage (relative to limit)", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "percentunit", - "label": "", - "logBase": 1, - "max": "1", - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - }, - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "editable": true, - "error": false, - "fill": 1, - "id": 11, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - 
"seriesOverrides": [], - "span": 6, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) / sum(container_spec_memory_limit_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name)", - "interval": "", - "intervalFactor": 1, - "legendFormat": "actual", - "refId": "A", - "step": 1 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Memory usage (relative to limit)", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "percentunit", - "label": null, - "logBase": 1, - "max": "1", - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - } - ], - "repeat": null, - "repeatIteration": null, - "repeatRowId": null, - "showTitle": false, - "title": "Usage relative to limit", - "titleSize": "h6" - }, - { - "collapse": true, - "height": 250, - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "fill": 1, - "id": 13, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 6, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(irate(container_cpu_usage_seconds_total{container_name=\"$container\", namespace=\"$namespace\"}[30s])) by (id,pod_name)", - "interval": "", - "intervalFactor": 2, - "legendFormat": "{{pod_name}}", - "refId": "A", - "step": 2 - }, - { - "expr": "sum(container_spec_cpu_quota{container_name=\"$container\", namespace=\"$namespace\"} / container_spec_cpu_period{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) / count(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) ", - "intervalFactor": 2, - "legendFormat": "limit", - "refId": "B", - "step": 2 - }, - { - "expr": "sum(container_spec_cpu_shares{container_name=\"$container\", namespace=\"$namespace\"} / 1024) by (namespace,container_name) / count(container_spec_cpu_shares{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) ", - "intervalFactor": 2, - "legendFormat": "request", - "refId": "C", - "step": 2 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Cpu usage (per pod)", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "short", - "label": "cores", - "logBase": 1, - "max": null, - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - }, - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "fill": 1, - "id": 14, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - 
"min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "null", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 6, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (id,pod_name)", - "interval": "", - "intervalFactor": 2, - "legendFormat": "{{pod_name}}", - "refId": "A", - "step": 2 - }, - { - "expr": "sum(container_spec_memory_limit_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) / count(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (container_name,namespace)", - "intervalFactor": 2, - "legendFormat": "limit", - "refId": "B", - "step": 2 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Memory usage (per pod)", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "bytes", - "label": null, - "logBase": 1, - "max": null, - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - } - ], - "repeat": null, - "repeatIteration": null, - "repeatRowId": null, - "showTitle": false, - "title": "Usage per pod", - "titleSize": "h6" - }, - { - "collapse": true, - "height": 250, - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "editable": true, - "error": false, - "fill": 1, - "id": 8, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 6, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(irate(container_cpu_usage_seconds_total{container_name=\"$container\", namespace=\"$namespace\"}[30s])) by (namespace,container_name) / count(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) ", - "interval": "", - "intervalFactor": 1, - "legendFormat": "actual", - "refId": "A", - "step": 1 - }, - { - "expr": "sum(container_spec_cpu_quota{container_name=\"$container\", namespace=\"$namespace\"} / container_spec_cpu_period{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) / count(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) ", - "interval": "", - "intervalFactor": 1, - "legendFormat": "limit", - "refId": "B", - "step": 1 - }, - { - "expr": "sum(container_spec_cpu_shares{container_name=\"$container\", namespace=\"$namespace\"} / 1024) by (namespace,container_name) / count(container_spec_cpu_shares{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) ", - "interval": "", - "intervalFactor": 1, - "legendFormat": "request", - "refId": "C", - "step": 1 - } - ], - "thresholds": [], - "timeFrom": null, - 
"timeShift": null, - "title": "Cpu usage (avg per pod)", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "none", - "label": "cores", - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - }, - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "editable": true, - "error": false, - "fill": 1, - "id": 9, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 1, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 6, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) / count(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) ", - "intervalFactor": 1, - "legendFormat": "actual", - "metric": "", - "refId": "A", - "step": 1 - }, - { - "expr": "sum(container_spec_memory_limit_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) / count(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name) ", - "interval": "", - "intervalFactor": 1, - "legendFormat": "limit", - "refId": "B", - "step": 1 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Memory usage (avg per pod)", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "bytes", - "label": null, - "logBase": 1, - "max": null, - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - } - ], - "repeat": null, - "repeatIteration": null, - "repeatRowId": null, - "showTitle": false, - "title": "Usage per pod (average)", - "titleSize": "h6" - }, - { - "collapse": true, - "height": 259.4375, - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "editable": true, - "error": false, - "fill": 1, - "grid": {}, - "id": 1, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 6, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(irate(container_cpu_usage_seconds_total{container_name=\"$container\", namespace=\"$namespace\"}[30s])) by (namespace,container_name)", - "hide": false, - "interval": "", - "intervalFactor": 1, - "legendFormat": "actual", - "metric": "", - "refId": "A", - "step": 1 - }, - { - 
"expr": "sum(container_spec_cpu_quota{container_name=\"$container\", namespace=\"$namespace\"} / container_spec_cpu_period{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name)", - "intervalFactor": 1, - "legendFormat": "limit", - "refId": "B", - "step": 1 - }, - { - "expr": "sum(container_spec_cpu_shares{container_name=\"$container\", namespace=\"$namespace\"} / 1024) by (namespace,container_name) ", - "intervalFactor": 1, - "legendFormat": "request", - "refId": "C", - "step": 1 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Cpu usage (total)", - "tooltip": { - "msResolution": true, - "shared": false, - "sort": 0, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "none", - "label": "cores", - "logBase": 1, - "max": null, - "min": 0, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - }, - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "editable": true, - "error": false, - "fill": 1, - "grid": {}, - "id": 2, - "legend": { - "alignAsTable": true, - "avg": true, - "current": false, - "max": true, - "min": false, - "rightSide": true, - "show": true, - "sort": "avg", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 6, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum(container_memory_usage_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name)", - "interval": "", - "intervalFactor": 1, - "legendFormat": "actual", - "refId": "A", - "step": 1 - }, - { - "expr": "sum(container_spec_memory_limit_bytes{container_name=\"$container\", namespace=\"$namespace\"}) by (namespace,container_name)", - "intervalFactor": 1, - "legendFormat": "limit", - "refId": "B", - "step": 1 - } - ], - "thresholds": [], - "timeFrom": null, - "timeShift": null, - "title": "Memory usage (total)", - "tooltip": { - "msResolution": true, - "shared": false, - "sort": 0, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "bytes", - "label": null, - "logBase": 1, - "max": null, - "min": "0", - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ] - } - ], - "repeat": null, - "repeatIteration": null, - "repeatRowId": null, - "showTitle": false, - "title": "Usage total", - "titleSize": "h6" - } - ], - "schemaVersion": 14, - "style": "dark", - "tags": [], - "templating": { - "list": [ - { - "allValue": ".+", - "current": {}, - "datasource": "${DS_PROMETHEUS}", - "hide": 0, - "includeAll": false, - "label": null, - "multi": false, - "name": "namespace", - "options": [], - "query": "label_values(container_memory_usage_bytes{namespace=~\".+\",container_name!=\"POD\"},namespace)", - "refresh": 1, - "regex": "", - "sort": 1, - "tagValuesQuery": null, - "tags": [], - "tagsQuery": null, - "type": "query", - "useTags": false - }, - { - "allValue": ".+", - "current": {}, - "datasource": "${DS_PROMETHEUS}", - "hide": 0, - "includeAll": false, - "label": null, - "multi": false, - "name": 
"container", - "options": [], - "query": "label_values(container_memory_usage_bytes{namespace=~\"$namespace\",container_name!=\"POD\"},container_name)", - "refresh": 1, - "regex": "", - "sort": 1, - "tagValuesQuery": null, - "tags": [], - "tagsQuery": null, - "type": "query", - "useTags": false - } - ] - }, - "time": { - "from": "now-3h", - "to": "now" - }, - "timepicker": { - "refresh_intervals": [ - "5s", - "10s", - "30s", - "1m", - "5m", - "15m", - "30m", - "1h", - "2h", - "1d" - ], - "time_options": [ - "5m", - "15m", - "1h", - "6h", - "12h", - "24h", - "2d", - "7d", - "30d" - ] - }, - "timezone": "browser", - "title": "Kubernetes App Metrics", - "version": 37, - "description": "After selecting your namespace and container you get a wealth of metrics like request rate, error rate, response times, pod count, cpu and memory usage. You can view cpu and memory usage in a variety of ways, compared to the limit, compared to the request, per pod, average per pod, etc." -} \ No newline at end of file diff --git a/heapster/grafana-dashboard/kubernetes-cluster-monitoring-via-prometheus_rev3.json b/heapster/grafana-dashboard/kubernetes-cluster-monitoring-via-prometheus_rev3.json deleted file mode 100644 index e5fb269..0000000 --- a/heapster/grafana-dashboard/kubernetes-cluster-monitoring-via-prometheus_rev3.json +++ /dev/null @@ -1,2079 +0,0 @@ -{ - "__inputs": [ - { - "name": "DS_PROMETHEUS", - "label": "Prometheus", - "description": "", - "type": "datasource", - "pluginId": "prometheus", - "pluginName": "Prometheus" - } - ], - "__requires": [ - { - "type": "panel", - "id": "graph", - "name": "Graph", - "version": "" - }, - { - "type": "panel", - "id": "singlestat", - "name": "Singlestat", - "version": "" - }, - { - "type": "grafana", - "id": "grafana", - "name": "Grafana", - "version": "3.1.1" - }, - { - "type": "datasource", - "id": "prometheus", - "name": "Prometheus", - "version": "1.3.0" - } - ], - "id": null, - "title": "Kubernetes cluster monitoring (via Prometheus)", - "description": "Monitors Kubernetes cluster using Prometheus. Shows overall cluster CPU / Memory / Filesystem usage as well as individual pod, containers, systemd services statistics. 
Uses cAdvisor metrics only.", - "tags": [ - "kubernetes" - ], - "style": "dark", - "timezone": "browser", - "editable": true, - "hideControls": false, - "sharedCrosshair": false, - "rows": [ - { - "collapse": false, - "editable": true, - "height": "200px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "decimals": 2, - "editable": true, - "error": false, - "fill": 1, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)", - "thresholdLine": false - }, - "height": "200px", - "id": 32, - "isNew": true, - "legend": { - "alignAsTable": false, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": false, - "show": false, - "sideWidth": 200, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum (rate (container_network_receive_bytes_total{kubernetes_io_hostname=~\"^$Node$\"}[1m]))", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "Received", - "metric": "network", - "refId": "A", - "step": 10 - }, - { - "expr": "- sum (rate (container_network_transmit_bytes_total{kubernetes_io_hostname=~\"^$Node$\"}[1m]))", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "Sent", - "metric": "network", - "refId": "B", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "Network I/O pressure", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 0, - "value_type": "cumulative" - }, - "transparent": false, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "Bps", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "Bps", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "title": "Network I/O pressure" - }, - { - "collapse": false, - "editable": true, - "height": "250px", - "panels": [ - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": true, - "colors": [ - "rgba(50, 172, 45, 0.97)", - "rgba(237, 129, 40, 0.89)", - "rgba(245, 54, 54, 0.9)" - ], - "datasource": "${DS_PROMETHEUS}", - "editable": true, - "error": false, - "format": "percent", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": true, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "height": "180px", - "id": 4, - "interval": null, - "isNew": true, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "span": 4, - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "targets": [ - { - "expr": "sum (container_memory_working_set_bytes{id=\"/\",kubernetes_io_hostname=~\"^$Node$\"}) / sum (machine_memory_bytes{kubernetes_io_hostname=~\"^$Node$\"}) * 100", - "interval": "10s", - "intervalFactor": 1, - "refId": "A", - "step": 10 - } - ], - 
"thresholds": "65, 90", - "title": "Cluster memory usage", - "transparent": false, - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": true, - "colors": [ - "rgba(50, 172, 45, 0.97)", - "rgba(237, 129, 40, 0.89)", - "rgba(245, 54, 54, 0.9)" - ], - "datasource": "${DS_PROMETHEUS}", - "decimals": 2, - "editable": true, - "error": false, - "format": "percent", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": true, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "height": "180px", - "id": 6, - "interval": null, - "isNew": true, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "span": 4, - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "targets": [ - { - "expr": "sum (rate (container_cpu_usage_seconds_total{id=\"/\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) / sum (machine_cpu_cores{kubernetes_io_hostname=~\"^$Node$\"}) * 100", - "interval": "10s", - "intervalFactor": 1, - "refId": "A", - "step": 10 - } - ], - "thresholds": "65, 90", - "title": "Cluster CPU usage (1m avg)", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": true, - "colors": [ - "rgba(50, 172, 45, 0.97)", - "rgba(237, 129, 40, 0.89)", - "rgba(245, 54, 54, 0.9)" - ], - "datasource": "${DS_PROMETHEUS}", - "decimals": 2, - "editable": true, - "error": false, - "format": "percent", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": true, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "height": "180px", - "id": 7, - "interval": null, - "isNew": true, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "span": 4, - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "targets": [ - { - "expr": "sum (container_fs_usage_bytes{device=~\"^/dev/[sv]d[a-z][1-9]$\",id=\"/\",kubernetes_io_hostname=~\"^$Node$\"}) / sum (container_fs_limit_bytes{device=~\"^/dev/[sv]d[a-z][1-9]$\",id=\"/\",kubernetes_io_hostname=~\"^$Node$\"}) * 100", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "", - "metric": "", - "refId": "A", - "step": 10 - } - ], - "thresholds": "65, 90", - "title": "Cluster filesystem usage", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "rgba(50, 172, 45, 0.97)", - 
"rgba(237, 129, 40, 0.89)", - "rgba(245, 54, 54, 0.9)" - ], - "datasource": "${DS_PROMETHEUS}", - "decimals": 2, - "editable": true, - "error": false, - "format": "bytes", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "height": "1px", - "id": 9, - "interval": null, - "isNew": true, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "20%", - "prefix": "", - "prefixFontSize": "20%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "span": 2, - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "targets": [ - { - "expr": "sum (container_memory_working_set_bytes{id=\"/\",kubernetes_io_hostname=~\"^$Node$\"})", - "interval": "10s", - "intervalFactor": 1, - "refId": "A", - "step": 10 - } - ], - "thresholds": "", - "title": "Used", - "type": "singlestat", - "valueFontSize": "50%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "rgba(50, 172, 45, 0.97)", - "rgba(237, 129, 40, 0.89)", - "rgba(245, 54, 54, 0.9)" - ], - "datasource": "${DS_PROMETHEUS}", - "decimals": 2, - "editable": true, - "error": false, - "format": "bytes", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "height": "1px", - "id": 10, - "interval": null, - "isNew": true, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "span": 2, - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "targets": [ - { - "expr": "sum (machine_memory_bytes{kubernetes_io_hostname=~\"^$Node$\"})", - "interval": "10s", - "intervalFactor": 1, - "refId": "A", - "step": 10 - } - ], - "thresholds": "", - "title": "Total", - "type": "singlestat", - "valueFontSize": "50%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "rgba(50, 172, 45, 0.97)", - "rgba(237, 129, 40, 0.89)", - "rgba(245, 54, 54, 0.9)" - ], - "datasource": "${DS_PROMETHEUS}", - "decimals": 2, - "editable": true, - "error": false, - "format": "none", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "height": "1px", - "id": 11, - "interval": null, - "isNew": true, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": " cores", - "postfixFontSize": "30%", - "prefix": "", - "prefixFontSize": "50%", - 
"rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "span": 2, - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "targets": [ - { - "expr": "sum (rate (container_cpu_usage_seconds_total{id=\"/\",kubernetes_io_hostname=~\"^$Node$\"}[1m]))", - "interval": "10s", - "intervalFactor": 1, - "refId": "A", - "step": 10 - } - ], - "thresholds": "", - "title": "Used", - "type": "singlestat", - "valueFontSize": "50%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "rgba(50, 172, 45, 0.97)", - "rgba(237, 129, 40, 0.89)", - "rgba(245, 54, 54, 0.9)" - ], - "datasource": "${DS_PROMETHEUS}", - "decimals": 2, - "editable": true, - "error": false, - "format": "none", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "height": "1px", - "id": 12, - "interval": null, - "isNew": true, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": " cores", - "postfixFontSize": "30%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "span": 2, - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "targets": [ - { - "expr": "sum (machine_cpu_cores{kubernetes_io_hostname=~\"^$Node$\"})", - "interval": "10s", - "intervalFactor": 1, - "refId": "A", - "step": 10 - } - ], - "thresholds": "", - "title": "Total", - "type": "singlestat", - "valueFontSize": "50%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - "colorBackground": false, - "colorValue": false, - "colors": [ - "rgba(50, 172, 45, 0.97)", - "rgba(237, 129, 40, 0.89)", - "rgba(245, 54, 54, 0.9)" - ], - "datasource": "${DS_PROMETHEUS}", - "decimals": 2, - "editable": true, - "error": false, - "format": "bytes", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "height": "1px", - "id": 13, - "interval": null, - "isNew": true, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "span": 2, - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "targets": [ - { - "expr": "sum (container_fs_usage_bytes{device=~\"^/dev/[sv]d[a-z][1-9]$\",id=\"/\",kubernetes_io_hostname=~\"^$Node$\"})", - "interval": "10s", - "intervalFactor": 1, - "refId": "A", - "step": 10 - } - ], - "thresholds": "", - "title": "Used", - "type": "singlestat", - "valueFontSize": "50%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "cacheTimeout": null, - 
"colorBackground": false, - "colorValue": false, - "colors": [ - "rgba(50, 172, 45, 0.97)", - "rgba(237, 129, 40, 0.89)", - "rgba(245, 54, 54, 0.9)" - ], - "datasource": "${DS_PROMETHEUS}", - "decimals": 2, - "editable": true, - "error": false, - "format": "bytes", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "height": "1px", - "id": 14, - "interval": null, - "isNew": true, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "span": 2, - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": false - }, - "targets": [ - { - "expr": "sum (container_fs_limit_bytes{device=~\"^/dev/[sv]d[a-z][1-9]$\",id=\"/\",kubernetes_io_hostname=~\"^$Node$\"})", - "interval": "10s", - "intervalFactor": 1, - "refId": "A", - "step": 10 - } - ], - "thresholds": "", - "title": "Total", - "type": "singlestat", - "valueFontSize": "50%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - } - ], - "showTitle": false, - "title": "Total usage" - }, - { - "collapse": false, - "editable": true, - "height": "250px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "decimals": 3, - "editable": true, - "error": false, - "fill": 0, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)" - }, - "height": "", - "id": 17, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": true, - "show": true, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": true, - "targets": [ - { - "expr": "sum (rate (container_cpu_usage_seconds_total{image!=\"\",name=~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (pod_name)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "{{ pod_name }}", - "metric": "container_cpu", - "refId": "A", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "Pods CPU usage (1m avg)", - "tooltip": { - "msResolution": true, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "transparent": false, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "none", - "label": "cores", - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "showTitle": false, - "title": "Pods CPU usage" - }, - { - "collapse": true, - "editable": true, - "height": "250px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "decimals": 3, - "editable": true, - "error": false, - "fill": 0, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 
0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)" - }, - "height": "", - "id": 23, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": true, - "show": true, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": true, - "targets": [ - { - "expr": "sum (rate (container_cpu_usage_seconds_total{systemd_service_name!=\"\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (systemd_service_name)", - "hide": false, - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "{{ systemd_service_name }}", - "metric": "container_cpu", - "refId": "A", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "System services CPU usage (1m avg)", - "tooltip": { - "msResolution": true, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "none", - "label": "cores", - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "title": "System services CPU usage" - }, - { - "collapse": true, - "editable": true, - "height": "250px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "decimals": 3, - "editable": true, - "error": false, - "fill": 0, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)" - }, - "height": "", - "id": 24, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "hideEmpty": false, - "hideZero": false, - "max": false, - "min": false, - "rightSide": true, - "show": true, - "sideWidth": null, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": true, - "targets": [ - { - "expr": "sum (rate (container_cpu_usage_seconds_total{image!=\"\",name=~\"^k8s_.*\",container_name!=\"POD\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (container_name, pod_name)", - "hide": false, - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "pod: {{ pod_name }} | {{ container_name }}", - "metric": "container_cpu", - "refId": "A", - "step": 10 - }, - { - "expr": "sum (rate (container_cpu_usage_seconds_total{image!=\"\",name!~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (kubernetes_io_hostname, name, image)", - "hide": false, - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "docker: {{ kubernetes_io_hostname }} | {{ image }} ({{ name }})", - "metric": "container_cpu", - "refId": "B", - "step": 10 - }, - { - "expr": "sum (rate (container_cpu_usage_seconds_total{rkt_container_name!=\"\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (kubernetes_io_hostname, rkt_container_name)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "rkt: {{ kubernetes_io_hostname }} | {{ rkt_container_name }}", - "metric": "container_cpu", - 
"refId": "C", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "Containers CPU usage (1m avg)", - "tooltip": { - "msResolution": true, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "none", - "label": "cores", - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "title": "Containers CPU usage" - }, - { - "collapse": true, - "editable": true, - "height": "500px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "decimals": 3, - "editable": true, - "error": false, - "fill": 0, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)" - }, - "id": 20, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": false, - "show": true, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": true, - "targets": [ - { - "expr": "sum (rate (container_cpu_usage_seconds_total{id!=\"/\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (id)", - "hide": false, - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "{{ id }}", - "metric": "container_cpu", - "refId": "A", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "All processes CPU usage (1m avg)", - "tooltip": { - "msResolution": true, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "none", - "label": "cores", - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "repeat": null, - "showTitle": false, - "title": "All processes CPU usage" - }, - { - "collapse": false, - "editable": true, - "height": "250px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "decimals": 2, - "editable": true, - "error": false, - "fill": 0, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)" - }, - "id": 25, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": true, - "show": true, - "sideWidth": 200, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": true, - "targets": [ - { - "expr": "sum (container_memory_working_set_bytes{image!=\"\",name=~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\"}) by (pod_name)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "{{ pod_name }}", - "metric": "container_memory_usage:sort_desc", - "refId": "A", - "step": 10 - } - ], - "timeFrom": null, - 
"timeShift": null, - "title": "Pods memory usage", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "bytes", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "title": "Pods memory usage" - }, - { - "collapse": true, - "editable": true, - "height": "250px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "decimals": 2, - "editable": true, - "error": false, - "fill": 0, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)" - }, - "id": 26, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": true, - "show": true, - "sideWidth": 200, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": true, - "targets": [ - { - "expr": "sum (container_memory_working_set_bytes{systemd_service_name!=\"\",kubernetes_io_hostname=~\"^$Node$\"}) by (systemd_service_name)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "{{ systemd_service_name }}", - "metric": "container_memory_usage:sort_desc", - "refId": "A", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "System services memory usage", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "bytes", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "title": "System services memory usage" - }, - { - "collapse": true, - "editable": true, - "height": "250px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "decimals": 2, - "editable": true, - "error": false, - "fill": 0, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)" - }, - "id": 27, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": true, - "show": true, - "sideWidth": 200, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": true, - "targets": [ - { - "expr": "sum (container_memory_working_set_bytes{image!=\"\",name=~\"^k8s_.*\",container_name!=\"POD\",kubernetes_io_hostname=~\"^$Node$\"}) by (container_name, pod_name)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "pod: {{ pod_name }} | {{ container_name }}", - "metric": "container_memory_usage:sort_desc", - "refId": "A", - "step": 10 - }, - { - 
"expr": "sum (container_memory_working_set_bytes{image!=\"\",name!~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\"}) by (kubernetes_io_hostname, name, image)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "docker: {{ kubernetes_io_hostname }} | {{ image }} ({{ name }})", - "metric": "container_memory_usage:sort_desc", - "refId": "B", - "step": 10 - }, - { - "expr": "sum (container_memory_working_set_bytes{rkt_container_name!=\"\",kubernetes_io_hostname=~\"^$Node$\"}) by (kubernetes_io_hostname, rkt_container_name)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "rkt: {{ kubernetes_io_hostname }} | {{ rkt_container_name }}", - "metric": "container_memory_usage:sort_desc", - "refId": "C", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "Containers memory usage", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "bytes", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "title": "Containers memory usage" - }, - { - "collapse": true, - "editable": true, - "height": "500px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "decimals": 2, - "editable": true, - "error": false, - "fill": 0, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)" - }, - "id": 28, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": false, - "show": true, - "sideWidth": 200, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": true, - "targets": [ - { - "expr": "sum (container_memory_working_set_bytes{id!=\"/\",kubernetes_io_hostname=~\"^$Node$\"}) by (id)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "{{ id }}", - "metric": "container_memory_usage:sort_desc", - "refId": "A", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "All processes memory usage", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "bytes", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "title": "All processes memory usage" - }, - { - "collapse": false, - "editable": true, - "height": "250px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "decimals": 2, - "editable": true, - "error": false, - "fill": 1, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)" - }, - "id": 16, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": true, - "show": true, - 
"sideWidth": 200, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum (rate (container_network_receive_bytes_total{image!=\"\",name=~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (pod_name)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "-> {{ pod_name }}", - "metric": "network", - "refId": "A", - "step": 10 - }, - { - "expr": "- sum (rate (container_network_transmit_bytes_total{image!=\"\",name=~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (pod_name)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "<- {{ pod_name }}", - "metric": "network", - "refId": "B", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "Pods network I/O (1m avg)", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "Bps", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "title": "Pods network I/O" - }, - { - "collapse": true, - "editable": true, - "height": "250px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "decimals": 2, - "editable": true, - "error": false, - "fill": 1, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)" - }, - "id": 30, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": true, - "show": true, - "sideWidth": 200, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum (rate (container_network_receive_bytes_total{image!=\"\",name=~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (container_name, pod_name)", - "hide": false, - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "-> pod: {{ pod_name }} | {{ container_name }}", - "metric": "network", - "refId": "B", - "step": 10 - }, - { - "expr": "- sum (rate (container_network_transmit_bytes_total{image!=\"\",name=~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (container_name, pod_name)", - "hide": false, - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "<- pod: {{ pod_name }} | {{ container_name }}", - "metric": "network", - "refId": "D", - "step": 10 - }, - { - "expr": "sum (rate (container_network_receive_bytes_total{image!=\"\",name!~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (kubernetes_io_hostname, name, image)", - "hide": false, - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "-> docker: {{ kubernetes_io_hostname }} | {{ image }} ({{ name }})", - "metric": "network", - "refId": "A", - "step": 10 - }, - { - "expr": "- sum (rate 
(container_network_transmit_bytes_total{image!=\"\",name!~\"^k8s_.*\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (kubernetes_io_hostname, name, image)", - "hide": false, - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "<- docker: {{ kubernetes_io_hostname }} | {{ image }} ({{ name }})", - "metric": "network", - "refId": "C", - "step": 10 - }, - { - "expr": "sum (rate (container_network_transmit_bytes_total{rkt_container_name!=\"\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (kubernetes_io_hostname, rkt_container_name)", - "hide": false, - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "-> rkt: {{ kubernetes_io_hostname }} | {{ rkt_container_name }}", - "metric": "network", - "refId": "E", - "step": 10 - }, - { - "expr": "- sum (rate (container_network_transmit_bytes_total{rkt_container_name!=\"\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (kubernetes_io_hostname, rkt_container_name)", - "hide": false, - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "<- rkt: {{ kubernetes_io_hostname }} | {{ rkt_container_name }}", - "metric": "network", - "refId": "F", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "Containers network I/O (1m avg)", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "Bps", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": false - } - ] - } - ], - "title": "Containers network I/O" - }, - { - "collapse": true, - "editable": true, - "height": "500px", - "panels": [ - { - "aliasColors": {}, - "bars": false, - "datasource": "${DS_PROMETHEUS}", - "decimals": 2, - "editable": true, - "error": false, - "fill": 1, - "grid": { - "threshold1": null, - "threshold1Color": "rgba(216, 200, 27, 0.27)", - "threshold2": null, - "threshold2Color": "rgba(234, 112, 112, 0.22)" - }, - "id": 29, - "isNew": true, - "legend": { - "alignAsTable": true, - "avg": true, - "current": true, - "max": false, - "min": false, - "rightSide": false, - "show": true, - "sideWidth": 200, - "sort": "current", - "sortDesc": true, - "total": false, - "values": true - }, - "lines": true, - "linewidth": 2, - "links": [], - "nullPointMode": "connected", - "percentage": false, - "pointradius": 5, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "span": 12, - "stack": false, - "steppedLine": false, - "targets": [ - { - "expr": "sum (rate (container_network_receive_bytes_total{id!=\"/\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (id)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "-> {{ id }}", - "metric": "network", - "refId": "A", - "step": 10 - }, - { - "expr": "- sum (rate (container_network_transmit_bytes_total{id!=\"/\",kubernetes_io_hostname=~\"^$Node$\"}[1m])) by (id)", - "interval": "10s", - "intervalFactor": 1, - "legendFormat": "<- {{ id }}", - "metric": "network", - "refId": "B", - "step": 10 - } - ], - "timeFrom": null, - "timeShift": null, - "title": "All processes network I/O (1m avg)", - "tooltip": { - "msResolution": false, - "shared": true, - "sort": 2, - "value_type": "cumulative" - }, - "type": "graph", - "xaxis": { - "show": true - }, - "yaxes": [ - { - "format": "Bps", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - 
"min": null, - "show": false - } - ] - } - ], - "title": "All processes network I/O" - } - ], - "time": { - "from": "now-5m", - "to": "now" - }, - "timepicker": { - "refresh_intervals": [ - "5s", - "10s", - "30s", - "1m", - "5m", - "15m", - "30m", - "1h", - "2h", - "1d" - ], - "time_options": [ - "5m", - "15m", - "1h", - "6h", - "12h", - "24h", - "2d", - "7d", - "30d" - ] - }, - "templating": { - "list": [ - { - "allValue": ".*", - "current": {}, - "datasource": "${DS_PROMETHEUS}", - "hide": 0, - "includeAll": true, - "multi": false, - "name": "Node", - "options": [], - "query": "label_values(kubernetes_io_hostname)", - "refresh": 1, - "type": "query" - } - ] - }, - "annotations": { - "list": [] - }, - "refresh": "10s", - "schemaVersion": 12, - "version": 13, - "links": [], - "gnetId": 315 -} \ No newline at end of file diff --git a/heapster/grafana.yaml b/heapster/grafana.yaml deleted file mode 100644 index acd45cc..0000000 --- a/heapster/grafana.yaml +++ /dev/null @@ -1,82 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: monitoring-grafana - namespace: kube-system -spec: - replicas: 1 - template: - metadata: - labels: - task: monitoring - k8s-app: grafana - spec: - nodeSelector: - node-role.kubernetes.io/master: "" - containers: - - name: grafana - image: k8s.gcr.io/heapster-grafana-amd64:v5.0.4 - ports: - - containerPort: 3000 - protocol: TCP - volumeMounts: - - mountPath: /etc/ssl/certs - name: ca-certificates - readOnly: true - env: - - name: INFLUXDB_HOST - value: monitoring-influxdb - - name: GF_SERVER_HTTP_PORT - value: "3000" - # The following env variables are required to make Grafana accessible via - # the kubernetes api-server proxy. On production clusters, we recommend - # removing these env variables, setup auth for grafana, and expose the grafana - # service using a LoadBalancer or a public IP. - # - name: GRAFANA_USER - # value: "admin" - # - name: GRAFANA_PASSWD - # value: "admin" - - name: GF_SECURITY_ADMIN_PASSWORD - value: "admin" - # - name: GF_AUTH_BASIC_ENABLED - # value: "false" - # - name: GF_AUTH_ANONYMOUS_ENABLED - # value: "true" - # - name: GF_AUTH_ANONYMOUS_ORG_ROLE - # value: Admin - - name: GF_SERVER_ROOT_URL - # If you're only using the API Server proxy, set this value instead: - # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy - value: / - volumes: - - name: ca-certificates - hostPath: - path: /etc/ssl/certs - # - name: grafana-storage - # emptyDir: {} - tolerations: - - key: node-role.kubernetes.io/master - effect: NoSchedule ---- -apiVersion: v1 -kind: Service -metadata: - labels: - # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons) - # If you are NOT using this as an addon, you should comment out this line. - kubernetes.io/cluster-service: 'true' - kubernetes.io/name: monitoring-grafana - name: monitoring-grafana - namespace: kube-system -spec: - # In a production setup, we recommend accessing Grafana through an external Loadbalancer - # or through a public IP. 
- # type: LoadBalancer - # You could also use NodePort to expose the service at a randomly-generated port - type: NodePort - ports: - - port: 80 - targetPort: 3000 - nodePort: 30006 - selector: - k8s-app: grafana diff --git a/heapster/heapster-rbac.yaml b/heapster/heapster-rbac.yaml deleted file mode 100644 index 10cce78..0000000 --- a/heapster/heapster-rbac.yaml +++ /dev/null @@ -1,39 +0,0 @@ ---- -kind: ClusterRoleBinding -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: heapster -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: heapster -subjects: -- kind: ServiceAccount - name: heapster - namespace: kube-system - ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: heapster -rules: -- apiGroups: - - "" - resources: - - pods - - nodes - - nodes/stats - - namespaces - verbs: - - get - - list - - watch -- apiGroups: - - "extensions" - resources: - - deployments - verbs: - - get - - list - - watch diff --git a/heapster/heapster.yaml b/heapster/heapster.yaml deleted file mode 100644 index 95a92dd..0000000 --- a/heapster/heapster.yaml +++ /dev/null @@ -1,49 +0,0 @@ -apiVersion: v1 -kind: ServiceAccount -metadata: - name: heapster - namespace: kube-system ---- -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: heapster - namespace: kube-system -spec: - replicas: 1 - template: - metadata: - labels: - task: monitoring - k8s-app: heapster - spec: - serviceAccountName: heapster - containers: - - name: heapster - image: k8s.gcr.io/heapster-amd64:v1.5.4 - imagePullPolicy: IfNotPresent - command: - - /heapster - # - --source=kubernetes:https://kubernetes.default - # the 10255 read-only kubelet port has been deprecated - - --source=kubernetes.summary_api:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true - - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086 - - --metric-resolution=30s ---- -apiVersion: v1 -kind: Service -metadata: - labels: - task: monitoring - # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons) - # If you are NOT using this as an addon, you should comment out this line. - kubernetes.io/cluster-service: 'true' - kubernetes.io/name: Heapster - name: heapster - namespace: kube-system -spec: - ports: - - port: 80 - targetPort: 8082 - selector: - k8s-app: heapster diff --git a/heapster/influxdb.yaml b/heapster/influxdb.yaml deleted file mode 100644 index e1edb6d..0000000 --- a/heapster/influxdb.yaml +++ /dev/null @@ -1,45 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: monitoring-influxdb - namespace: kube-system -spec: - replicas: 1 - template: - metadata: - labels: - task: monitoring - k8s-app: influxdb - spec: - nodeSelector: - node-role.kubernetes.io/master: "" - containers: - - name: influxdb - image: k8s.gcr.io/heapster-influxdb-amd64:v1.5.2 - volumeMounts: - - mountPath: /data - name: influxdb-storage - volumes: - - name: influxdb-storage - emptyDir: {} - tolerations: - - key: node-role.kubernetes.io/master - effect: NoSchedule ---- -apiVersion: v1 -kind: Service -metadata: - labels: - task: monitoring - # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons) - # If you are NOT using this as an addon, you should comment out this line. 
- kubernetes.io/cluster-service: 'true' - kubernetes.io/name: monitoring-influxdb - name: monitoring-influxdb - namespace: kube-system -spec: - ports: - - port: 8086 - targetPort: 8086 - selector: - k8s-app: influxdb diff --git a/images/Kubernetes.png b/images/Kubernetes.png deleted file mode 100644 index c27f32b..0000000 Binary files a/images/Kubernetes.png and /dev/null differ diff --git a/images/dashboard-login.png b/images/dashboard-login.png deleted file mode 100644 index bedc016..0000000 Binary files a/images/dashboard-login.png and /dev/null differ diff --git a/images/dashboard.png b/images/dashboard.png deleted file mode 100644 index 2f2d8c2..0000000 Binary files a/images/dashboard.png and /dev/null differ diff --git a/images/grafana-app.png b/images/grafana-app.png deleted file mode 100644 index 9d2118b..0000000 Binary files a/images/grafana-app.png and /dev/null differ diff --git a/images/grafana-cluster.png b/images/grafana-cluster.png deleted file mode 100644 index 1a4696c..0000000 Binary files a/images/grafana-cluster.png and /dev/null differ diff --git a/images/grafana-datasource.png b/images/grafana-datasource.png deleted file mode 100644 index 7f2a522..0000000 Binary files a/images/grafana-datasource.png and /dev/null differ diff --git a/images/grafana-import.png b/images/grafana-import.png deleted file mode 100644 index b2aac03..0000000 Binary files a/images/grafana-import.png and /dev/null differ diff --git a/images/ha.png b/images/ha.png deleted file mode 100644 index 989f9d1..0000000 Binary files a/images/ha.png and /dev/null differ diff --git a/images/k8s-ha.png b/images/k8s-ha.png deleted file mode 100644 index 0b8362b..0000000 Binary files a/images/k8s-ha.png and /dev/null differ diff --git a/images/prometheus.png b/images/prometheus.png deleted file mode 100644 index a11abce..0000000 Binary files a/images/prometheus.png and /dev/null differ diff --git a/images/traefik.png b/images/traefik.png deleted file mode 100644 index 6a3ee54..0000000 Binary files a/images/traefik.png and /dev/null differ diff --git a/istio/crds.yaml b/istio/crds.yaml deleted file mode 100644 index acdf539..0000000 --- a/istio/crds.yaml +++ /dev/null @@ -1,1116 +0,0 @@ -# {{ if or .Values.global.crds (semverCompare ">=2.10.0-0" .Capabilities.TillerVersion.SemVer) }} -# these CRDs only make sense when pilot is enabled -# {{- if .Values.pilot.enabled }} -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: virtualservices.networking.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: istio-pilot -spec: - group: networking.istio.io - names: - kind: VirtualService - listKind: VirtualServiceList - plural: virtualservices - singular: virtualservice - categories: - - istio-io - - networking-istio-io - scope: Namespaced - version: v1alpha3 ---- -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: destinationrules.networking.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: istio-pilot -spec: - group: networking.istio.io - names: - kind: DestinationRule - listKind: DestinationRuleList - plural: destinationrules - singular: destinationrule - categories: - - istio-io - - networking-istio-io - scope: Namespaced - version: v1alpha3 ---- -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: serviceentries.networking.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: istio-pilot -spec: - group: networking.istio.io - names: - kind: 
ServiceEntry - listKind: ServiceEntryList - plural: serviceentries - singular: serviceentry - categories: - - istio-io - - networking-istio-io - scope: Namespaced - version: v1alpha3 ---- -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: gateways.networking.istio.io - annotations: - "helm.sh/hook": crd-install - "helm.sh/hook-weight": "-5" - labels: - app: istio-pilot -spec: - group: networking.istio.io - names: - kind: Gateway - plural: gateways - singular: gateway - categories: - - istio-io - - networking-istio-io - scope: Namespaced - version: v1alpha3 ---- -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: envoyfilters.networking.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: istio-pilot -spec: - group: networking.istio.io - names: - kind: EnvoyFilter - plural: envoyfilters - singular: envoyfilter - categories: - - istio-io - - networking-istio-io - scope: Namespaced - version: v1alpha3 ---- -# {{- end }} - -# these CRDs only make sense when security is enabled -# {{- if .Values.security.enabled }} -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - annotations: - "helm.sh/hook": crd-install - name: policies.authentication.istio.io -spec: - group: authentication.istio.io - names: - kind: Policy - plural: policies - singular: policy - categories: - - istio-io - - authentication-istio-io - scope: Namespaced - version: v1alpha1 ---- -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - annotations: - "helm.sh/hook": crd-install - name: meshpolicies.authentication.istio.io -spec: - group: authentication.istio.io - names: - kind: MeshPolicy - listKind: MeshPolicyList - plural: meshpolicies - singular: meshpolicy - categories: - - istio-io - - authentication-istio-io - scope: Cluster - version: v1alpha1 ---- -# {{- end }} - -# {{- if .Values.mixer.enabled }} -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - annotations: - "helm.sh/hook": crd-install - name: httpapispecbindings.config.istio.io -spec: - group: config.istio.io - names: - kind: HTTPAPISpecBinding - plural: httpapispecbindings - singular: httpapispecbinding - categories: - - istio-io - - apim-istio-io - scope: Namespaced - version: v1alpha2 ---- -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - annotations: - "helm.sh/hook": crd-install - name: httpapispecs.config.istio.io -spec: - group: config.istio.io - names: - kind: HTTPAPISpec - plural: httpapispecs - singular: httpapispec - categories: - - istio-io - - apim-istio-io - scope: Namespaced - version: v1alpha2 ---- -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - annotations: - "helm.sh/hook": crd-install - name: quotaspecbindings.config.istio.io -spec: - group: config.istio.io - names: - kind: QuotaSpecBinding - plural: quotaspecbindings - singular: quotaspecbinding - categories: - - istio-io - - apim-istio-io - scope: Namespaced - version: v1alpha2 ---- -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - annotations: - "helm.sh/hook": crd-install - name: quotaspecs.config.istio.io -spec: - group: config.istio.io - names: - kind: QuotaSpec - plural: quotaspecs - singular: quotaspec - categories: - - istio-io - - apim-istio-io - scope: Namespaced - version: v1alpha2 ---- - -# Mixer CRDs -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 
-metadata: - name: rules.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: istio.io.mixer - istio: core -spec: - group: config.istio.io - names: - kind: rule - plural: rules - singular: rule - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: attributemanifests.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: istio.io.mixer - istio: core -spec: - group: config.istio.io - names: - kind: attributemanifest - plural: attributemanifests - singular: attributemanifest - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: bypasses.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: bypass - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: bypass - plural: bypasses - singular: bypass - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: circonuses.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: circonus - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: circonus - plural: circonuses - singular: circonus - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: deniers.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: denier - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: denier - plural: deniers - singular: denier - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: fluentds.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: fluentd - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: fluentd - plural: fluentds - singular: fluentd - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: kubernetesenvs.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: kubernetesenv - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: kubernetesenv - plural: kubernetesenvs - singular: kubernetesenv - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: listcheckers.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: listchecker - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: listchecker - plural: listcheckers - singular: listchecker - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: memquotas.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: memquota - 
istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: memquota - plural: memquotas - singular: memquota - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: noops.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: noop - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: noop - plural: noops - singular: noop - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: opas.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: opa - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: opa - plural: opas - singular: opa - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: prometheuses.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: prometheus - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: prometheus - plural: prometheuses - singular: prometheus - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: rbacs.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: rbac - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: rbac - plural: rbacs - singular: rbac - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: redisquotas.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - package: redisquota - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: redisquota - plural: redisquotas - singular: redisquota - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: servicecontrols.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: servicecontrol - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: servicecontrol - plural: servicecontrols - singular: servicecontrol - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 - ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: signalfxs.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: signalfx - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: signalfx - plural: signalfxs - singular: signalfx - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: solarwindses.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: solarwinds - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: solarwinds - plural: solarwindses - singular: solarwinds - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: 
CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: stackdrivers.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: stackdriver - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: stackdriver - plural: stackdrivers - singular: stackdriver - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: statsds.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: statsd - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: statsd - plural: statsds - singular: statsd - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: stdios.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: stdio - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: stdio - plural: stdios - singular: stdio - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: apikeys.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: apikey - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: apikey - plural: apikeys - singular: apikey - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: authorizations.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: authorization - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: authorization - plural: authorizations - singular: authorization - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: checknothings.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: checknothing - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: checknothing - plural: checknothings - singular: checknothing - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: kuberneteses.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: adapter.template.kubernetes - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: kubernetes - plural: kuberneteses - singular: kubernetes - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: listentries.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: listentry - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: listentry - plural: listentries - singular: listentry - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: 
logentries.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: logentry - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: logentry - plural: logentries - singular: logentry - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: edges.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: edge - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: edge - plural: edges - singular: edge - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: metrics.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: metric - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: metric - plural: metrics - singular: metric - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: quotas.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: quota - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: quota - plural: quotas - singular: quota - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: reportnothings.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: reportnothing - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: reportnothing - plural: reportnothings - singular: reportnothing - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: servicecontrolreports.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: servicecontrolreport - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: servicecontrolreport - plural: servicecontrolreports - singular: servicecontrolreport - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: tracespans.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: tracespan - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: tracespan - plural: tracespans - singular: tracespan - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: rbacconfigs.rbac.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: istio.io.mixer - istio: rbac -spec: - group: rbac.istio.io - names: - kind: RbacConfig - plural: rbacconfigs - singular: rbacconfig - categories: - - istio-io - - rbac-istio-io - scope: Namespaced - version: v1alpha1 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: serviceroles.rbac.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: 
istio.io.mixer - istio: rbac -spec: - group: rbac.istio.io - names: - kind: ServiceRole - plural: serviceroles - singular: servicerole - categories: - - istio-io - - rbac-istio-io - scope: Namespaced - version: v1alpha1 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: servicerolebindings.rbac.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: istio.io.mixer - istio: rbac -spec: - group: rbac.istio.io - names: - kind: ServiceRoleBinding - plural: servicerolebindings - singular: servicerolebinding - categories: - - istio-io - - rbac-istio-io - scope: Namespaced - version: v1alpha1 ---- -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: adapters.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: adapter - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: adapter - plural: adapters - singular: adapter - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: instances.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: instance - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: instance - plural: instances - singular: instance - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: templates.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: template - istio: mixer-template -spec: - group: config.istio.io - names: - kind: template - plural: templates - singular: template - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: handlers.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: handler - istio: mixer-handler -spec: - group: config.istio.io - names: - kind: handler - plural: handlers - singular: handler - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- -# {{- end }} -# {{ end }} \ No newline at end of file diff --git a/istio/istio-demo.yaml b/istio/istio-demo.yaml deleted file mode 100644 index bbbeaa2..0000000 --- a/istio/istio-demo.yaml +++ /dev/null @@ -1,5134 +0,0 @@ -apiVersion: v1 -kind: Namespace -metadata: - name: istio-system - labels: - istio-injection: disabled ---- -# Source: istio/charts/galley/templates/configmap.yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: istio-galley-configuration - namespace: istio-system - labels: - app: istio-galley - chart: galley-1.0.0 - release: RELEASE-NAME - heritage: Tiller - istio: mixer -data: - validatingwebhookconfiguration.yaml: |- - apiVersion: admissionregistration.k8s.io/v1beta1 - kind: ValidatingWebhookConfiguration - metadata: - name: istio-galley - namespace: istio-system - labels: - app: istio-galley - chart: galley-1.0.0 - release: RELEASE-NAME - heritage: Tiller - webhooks: - - name: pilot.validation.istio.io - clientConfig: - service: - name: istio-galley - namespace: istio-system - path: "/admitpilot" - caBundle: "" - rules: - - operations: - - CREATE - - UPDATE - apiGroups: - - config.istio.io - apiVersions: - - v1alpha2 - resources: - - httpapispecs - - 
httpapispecbindings - - quotaspecs - - quotaspecbindings - - operations: - - CREATE - - UPDATE - apiGroups: - - rbac.istio.io - apiVersions: - - "*" - resources: - - "*" - - operations: - - CREATE - - UPDATE - apiGroups: - - authentication.istio.io - apiVersions: - - "*" - resources: - - "*" - - operations: - - CREATE - - UPDATE - apiGroups: - - networking.istio.io - apiVersions: - - "*" - resources: - - destinationrules - - envoyfilters - - gateways - # disabled per @costinm's request - # - serviceentries - - virtualservices - failurePolicy: Fail - - name: mixer.validation.istio.io - clientConfig: - service: - name: istio-galley - namespace: istio-system - path: "/admitmixer" - caBundle: "" - rules: - - operations: - - CREATE - - UPDATE - apiGroups: - - config.istio.io - apiVersions: - - v1alpha2 - resources: - - rules - - attributemanifests - - circonuses - - deniers - - fluentds - - kubernetesenvs - - listcheckers - - memquotas - - noops - - opas - - prometheuses - - rbacs - - servicecontrols - - solarwindses - - stackdrivers - - statsds - - stdios - - apikeys - - authorizations - - checknothings - # - kuberneteses - - listentries - - logentries - - metrics - - quotas - - reportnothings - - servicecontrolreports - - tracespans - failurePolicy: Fail - - ---- -# Source: istio/charts/grafana/templates/configmap.yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: istio-grafana-custom-resources - namespace: istio-system - labels: - app: istio-grafana - chart: grafana-0.1.0 - release: RELEASE-NAME - heritage: Tiller - istio: grafana -data: - custom-resources.yaml: |- - apiVersion: authentication.istio.io/v1alpha1 - kind: Policy - metadata: - name: grafana-ports-mtls-disabled - namespace: istio-system - spec: - targets: - - name: grafana - ports: - - number: 3000 - run.sh: |- - #!/bin/sh - - set -x - - if [ "$#" -ne "1" ]; then - echo "first argument should be path to custom resource yaml" - exit 1 - fi - - pathToResourceYAML=${1} - - /kubectl get validatingwebhookconfiguration istio-galley 2>/dev/null - if [ "$?" -eq 0 ]; then - echo "istio-galley validatingwebhookconfiguration found - waiting for istio-galley deployment to be ready" - while true; do - /kubectl -n istio-system get deployment istio-galley 2>/dev/null - if [ "$?" -eq 0 ]; then - break - fi - sleep 1 - done - /kubectl -n istio-system rollout status deployment istio-galley - if [ "$?" -ne 0 ]; then - echo "istio-galley deployment rollout status check failed" - exit 1 - fi - echo "istio-galley deployment ready for configuration validation" - fi - sleep 5 - /kubectl apply -f ${pathToResourceYAML} - - ---- -# Source: istio/charts/mixer/templates/configmap.yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: istio-statsd-prom-bridge - namespace: istio-system - labels: - app: istio-statsd-prom-bridge - chart: mixer-1.0.0 - release: RELEASE-NAME - heritage: Tiller - istio: mixer -data: - mapping.conf: |- - ---- -# Source: istio/charts/prometheus/templates/configmap.yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: prometheus - namespace: istio-system - labels: - app: prometheus - chart: prometheus-0.1.0 - release: RELEASE-NAME - heritage: Tiller -data: - prometheus.yml: |- - global: - scrape_interval: 15s - scrape_configs: - - - job_name: 'istio-mesh' - # Override the global default and scrape targets from this job every 5 seconds. 
- scrape_interval: 5s - - kubernetes_sd_configs: - - role: endpoints - namespaces: - names: - - istio-system - - relabel_configs: - - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] - action: keep - regex: istio-telemetry;prometheus - - - job_name: 'envoy' - # Override the global default and scrape targets from this job every 5 seconds. - scrape_interval: 5s - # metrics_path defaults to '/metrics' - # scheme defaults to 'http'. - - kubernetes_sd_configs: - - role: endpoints - namespaces: - names: - - istio-system - - relabel_configs: - - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] - action: keep - regex: istio-statsd-prom-bridge;statsd-prom - - - job_name: 'istio-policy' - # Override the global default and scrape targets from this job every 5 seconds. - scrape_interval: 5s - # metrics_path defaults to '/metrics' - # scheme defaults to 'http'. - - kubernetes_sd_configs: - - role: endpoints - namespaces: - names: - - istio-system - - - relabel_configs: - - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] - action: keep - regex: istio-policy;http-monitoring - - - job_name: 'istio-telemetry' - # Override the global default and scrape targets from this job every 5 seconds. - scrape_interval: 5s - # metrics_path defaults to '/metrics' - # scheme defaults to 'http'. - - kubernetes_sd_configs: - - role: endpoints - namespaces: - names: - - istio-system - - relabel_configs: - - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] - action: keep - regex: istio-telemetry;http-monitoring - - - job_name: 'pilot' - # Override the global default and scrape targets from this job every 5 seconds. - scrape_interval: 5s - # metrics_path defaults to '/metrics' - # scheme defaults to 'http'. - - kubernetes_sd_configs: - - role: endpoints - namespaces: - names: - - istio-system - - relabel_configs: - - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] - action: keep - regex: istio-pilot;http-monitoring - - - job_name: 'galley' - # Override the global default and scrape targets from this job every 5 seconds. - scrape_interval: 5s - # metrics_path defaults to '/metrics' - # scheme defaults to 'http'. 
- - kubernetes_sd_configs: - - role: endpoints - namespaces: - names: - - istio-system - - relabel_configs: - - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] - action: keep - regex: istio-galley;http-monitoring - - # scrape config for API servers - - job_name: 'kubernetes-apiservers' - kubernetes_sd_configs: - - role: endpoints - namespaces: - names: - - default - scheme: https - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - relabel_configs: - - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] - action: keep - regex: kubernetes;https - - # scrape config for nodes (kubelet) - - job_name: 'kubernetes-nodes' - scheme: https - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - kubernetes_sd_configs: - - role: node - relabel_configs: - - action: labelmap - regex: __meta_kubernetes_node_label_(.+) - - target_label: __address__ - replacement: kubernetes.default.svc:443 - - source_labels: [__meta_kubernetes_node_name] - regex: (.+) - target_label: __metrics_path__ - replacement: /api/v1/nodes/${1}/proxy/metrics - - # Scrape config for Kubelet cAdvisor. - # - # This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics - # (those whose names begin with 'container_') have been removed from the - # Kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to - # retrieve those metrics. - # - # In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor - # HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics" - # in that case (and ensure cAdvisor's HTTP server hasn't been disabled with - # the --cadvisor-port=0 Kubelet flag). - # - # This job is not necessary and should be removed in Kubernetes 1.6 and - # earlier versions, or it will cause the metrics to be scraped twice. - - job_name: 'kubernetes-cadvisor' - scheme: https - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - kubernetes_sd_configs: - - role: node - relabel_configs: - - action: labelmap - regex: __meta_kubernetes_node_label_(.+) - - target_label: __address__ - replacement: kubernetes.default.svc:443 - - source_labels: [__meta_kubernetes_node_name] - regex: (.+) - target_label: __metrics_path__ - replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor - - # scrape config for service endpoints. - - job_name: 'kubernetes-service-endpoints' - kubernetes_sd_configs: - - role: endpoints - relabel_configs: - - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape] - action: keep - regex: true - - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] - action: replace - target_label: __scheme__ - regex: (https?) 
- - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] - action: replace - target_label: __metrics_path__ - regex: (.+) - - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] - action: replace - target_label: __address__ - regex: ([^:]+)(?::\d+)?;(\d+) - replacement: $1:$2 - - action: labelmap - regex: __meta_kubernetes_service_label_(.+) - - source_labels: [__meta_kubernetes_namespace] - action: replace - target_label: kubernetes_namespace - - source_labels: [__meta_kubernetes_service_name] - action: replace - target_label: kubernetes_name - - # Example scrape config for pods - - job_name: 'kubernetes-pods' - kubernetes_sd_configs: - - role: pod - - relabel_configs: - - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] - action: keep - regex: true - - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] - action: replace - target_label: __metrics_path__ - regex: (.+) - - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] - action: replace - regex: ([^:]+)(?::\d+)?;(\d+) - replacement: $1:$2 - target_label: __address__ - - action: labelmap - regex: __meta_kubernetes_pod_label_(.+) - - source_labels: [__meta_kubernetes_namespace] - action: replace - target_label: namespace - - source_labels: [__meta_kubernetes_pod_name] - action: replace - target_label: pod_name - ---- -# Source: istio/charts/security/templates/configmap.yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: istio-security-custom-resources - namespace: istio-system - labels: - app: istio-security - chart: security-1.0.0 - release: RELEASE-NAME - heritage: Tiller - istio: security -data: - custom-resources.yaml: |- - run.sh: |- - #!/bin/sh - - set -x - - if [ "$#" -ne "1" ]; then - echo "first argument should be path to custom resource yaml" - exit 1 - fi - - pathToResourceYAML=${1} - - /kubectl get validatingwebhookconfiguration istio-galley 2>/dev/null - if [ "$?" -eq 0 ]; then - echo "istio-galley validatingwebhookconfiguration found - waiting for istio-galley deployment to be ready" - while true; do - /kubectl -n istio-system get deployment istio-galley 2>/dev/null - if [ "$?" -eq 0 ]; then - break - fi - sleep 1 - done - /kubectl -n istio-system rollout status deployment istio-galley - if [ "$?" -ne 0 ]; then - echo "istio-galley deployment rollout status check failed" - exit 1 - fi - echo "istio-galley deployment ready for configuration validation" - fi - sleep 5 - /kubectl apply -f ${pathToResourceYAML} - - ---- -# Source: istio/templates/configmap.yaml - -apiVersion: v1 -kind: ConfigMap -metadata: - name: istio - namespace: istio-system - labels: - app: istio - chart: istio-1.0.0 - release: RELEASE-NAME - heritage: Tiller -data: - mesh: |- - # Set the following variable to true to disable policy checks by the Mixer. - # Note that metrics will still be reported to the Mixer. - disablePolicyChecks: false - - # Set enableTracing to false to disable request tracing. - enableTracing: true - - # Set accessLogFile to empty string to disable access log. - accessLogFile: "/dev/stdout" - # - # Deprecated: mixer is using EDS - mixerCheckServer: istio-policy.istio-system.svc.cluster.local:9091 - mixerReportServer: istio-telemetry.istio-system.svc.cluster.local:9091 - - # Unix Domain Socket through which envoy communicates with NodeAgent SDS to get - # key/cert for mTLS. Use secret-mount files instead of SDS if set to empty. - sdsUdsPath: "" - - # How frequently should Envoy fetch key/cert from NodeAgent. 
- sdsRefreshDelay: 15s - - # - defaultConfig: - # - # TCP connection timeout between Envoy & the application, and between Envoys. - connectTimeout: 10s - # - ### ADVANCED SETTINGS ############# - # Where should envoy's configuration be stored in the istio-proxy container - configPath: "/etc/istio/proxy" - binaryPath: "/usr/local/bin/envoy" - # The pseudo service name used for Envoy. - serviceCluster: istio-proxy - # These settings that determine how long an old Envoy - # process should be kept alive after an occasional reload. - drainDuration: 45s - parentShutdownDuration: 1m0s - # - # The mode used to redirect inbound connections to Envoy. This setting - # has no effect on outbound traffic: iptables REDIRECT is always used for - # outbound connections. - # If "REDIRECT", use iptables REDIRECT to NAT and redirect to Envoy. - # The "REDIRECT" mode loses source addresses during redirection. - # If "TPROXY", use iptables TPROXY to redirect to Envoy. - # The "TPROXY" mode preserves both the source and destination IP - # addresses and ports, so that they can be used for advanced filtering - # and manipulation. - # The "TPROXY" mode also configures the sidecar to run with the - # CAP_NET_ADMIN capability, which is required to use TPROXY. - #interceptionMode: REDIRECT - # - # Port where Envoy listens (on local host) for admin commands - # You can exec into the istio-proxy container in a pod and - # curl the admin port (curl http://localhost:15000/) to obtain - # diagnostic information from Envoy. See - # https://lyft.github.io/envoy/docs/operations/admin.html - # for more details - proxyAdminPort: 15000 - # - # Zipkin trace collector - zipkinAddress: zipkin.istio-system:9411 - # - # Statsd metrics collector converts statsd metrics into Prometheus metrics. - statsdUdpAddress: istio-statsd-prom-bridge.istio-system:9125 - # - # Mutual TLS authentication between sidecars and istio control plane. 
- controlPlaneAuthPolicy: NONE - # - # Address where istio Pilot service is running - discoveryAddress: istio-pilot.istio-system:15007 - ---- -# Source: istio/templates/sidecar-injector-configmap.yaml - -apiVersion: v1 -kind: ConfigMap -metadata: - name: istio-sidecar-injector - namespace: istio-system - labels: - app: istio - chart: istio-1.0.0 - release: RELEASE-NAME - heritage: Tiller - istio: sidecar-injector -data: - config: |- - policy: enabled - template: |- - initContainers: - - name: istio-init - image: "gcr.io/istio-release/proxy_init:1.0.0" - args: - - "-p" - - [[ .MeshConfig.ProxyListenPort ]] - - "-u" - - 1337 - - "-m" - - [[ or (index .ObjectMeta.Annotations "sidecar.istio.io/interceptionMode") .ProxyConfig.InterceptionMode.String ]] - - "-i" - [[ if (isset .ObjectMeta.Annotations "traffic.sidecar.istio.io/includeOutboundIPRanges") -]] - - "[[ index .ObjectMeta.Annotations "traffic.sidecar.istio.io/includeOutboundIPRanges" ]]" - [[ else -]] - - "*" - [[ end -]] - - "-x" - [[ if (isset .ObjectMeta.Annotations "traffic.sidecar.istio.io/excludeOutboundIPRanges") -]] - - "[[ index .ObjectMeta.Annotations "traffic.sidecar.istio.io/excludeOutboundIPRanges" ]]" - [[ else -]] - - "" - [[ end -]] - - "-b" - [[ if (isset .ObjectMeta.Annotations "traffic.sidecar.istio.io/includeInboundPorts") -]] - - "[[ index .ObjectMeta.Annotations "traffic.sidecar.istio.io/includeInboundPorts" ]]" - [[ else -]] - - [[ range .Spec.Containers -]][[ range .Ports -]][[ .ContainerPort -]], [[ end -]][[ end -]][[ end]] - - "-d" - [[ if (isset .ObjectMeta.Annotations "traffic.sidecar.istio.io/excludeInboundPorts") -]] - - "[[ index .ObjectMeta.Annotations "traffic.sidecar.istio.io/excludeInboundPorts" ]]" - [[ else -]] - - "" - [[ end -]] - imagePullPolicy: IfNotPresent - securityContext: - capabilities: - add: - - NET_ADMIN - privileged: true - restartPolicy: Always - - containers: - - name: istio-proxy - image: [[ if (isset .ObjectMeta.Annotations "sidecar.istio.io/proxyImage") -]] - "[[ index .ObjectMeta.Annotations "sidecar.istio.io/proxyImage" ]]" - [[ else -]] - gcr.io/istio-release/proxyv2:1.0.0 - [[ end -]] - args: - - proxy - - sidecar - - --configPath - - [[ .ProxyConfig.ConfigPath ]] - - --binaryPath - - [[ .ProxyConfig.BinaryPath ]] - - --serviceCluster - [[ if ne "" (index .ObjectMeta.Labels "app") -]] - - [[ index .ObjectMeta.Labels "app" ]] - [[ else -]] - - "istio-proxy" - [[ end -]] - - --drainDuration - - [[ formatDuration .ProxyConfig.DrainDuration ]] - - --parentShutdownDuration - - [[ formatDuration .ProxyConfig.ParentShutdownDuration ]] - - --discoveryAddress - - [[ .ProxyConfig.DiscoveryAddress ]] - - --discoveryRefreshDelay - - [[ formatDuration .ProxyConfig.DiscoveryRefreshDelay ]] - - --zipkinAddress - - [[ .ProxyConfig.ZipkinAddress ]] - - --connectTimeout - - [[ formatDuration .ProxyConfig.ConnectTimeout ]] - - --statsdUdpAddress - - [[ .ProxyConfig.StatsdUdpAddress ]] - - --proxyAdminPort - - [[ .ProxyConfig.ProxyAdminPort ]] - - --controlPlaneAuthPolicy - - [[ or (index .ObjectMeta.Annotations "sidecar.istio.io/controlPlaneAuthPolicy") .ProxyConfig.ControlPlaneAuthPolicy ]] - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: INSTANCE_IP - valueFrom: - fieldRef: - fieldPath: status.podIP - - name: ISTIO_META_POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: ISTIO_META_INTERCEPTION_MODE - value: [[ or (index .ObjectMeta.Annotations 
"sidecar.istio.io/interceptionMode") .ProxyConfig.InterceptionMode.String ]] - imagePullPolicy: IfNotPresent - securityContext: - privileged: false - readOnlyRootFilesystem: true - [[ if eq (or (index .ObjectMeta.Annotations "sidecar.istio.io/interceptionMode") .ProxyConfig.InterceptionMode.String) "TPROXY" -]] - capabilities: - add: - - NET_ADMIN - runAsGroup: 1337 - [[ else -]] - runAsUser: 1337 - [[ end -]] - restartPolicy: Always - resources: - [[ if (isset .ObjectMeta.Annotations "sidecar.istio.io/proxyCPU") -]] - requests: - cpu: "[[ index .ObjectMeta.Annotations "sidecar.istio.io/proxyCPU" ]]" - memory: "[[ index .ObjectMeta.Annotations "sidecar.istio.io/proxyMemory" ]]" - [[ else -]] - requests: - cpu: 10m - - [[ end -]] - volumeMounts: - - mountPath: /etc/istio/proxy - name: istio-envoy - - mountPath: /etc/certs/ - name: istio-certs - readOnly: true - volumes: - - emptyDir: - medium: Memory - name: istio-envoy - - name: istio-certs - secret: - optional: true - [[ if eq .Spec.ServiceAccountName "" -]] - secretName: istio.default - [[ else -]] - secretName: [[ printf "istio.%s" .Spec.ServiceAccountName ]] - [[ end -]] - ---- -# Source: istio/charts/galley/templates/serviceaccount.yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: istio-galley-service-account - namespace: istio-system - labels: - app: istio-galley - chart: galley-1.0.0 - heritage: Tiller - release: RELEASE-NAME - ---- -# Source: istio/charts/gateways/templates/serviceaccount.yaml - -apiVersion: v1 -kind: ServiceAccount -metadata: - name: istio-egressgateway-service-account - namespace: istio-system - labels: - app: egressgateway - chart: gateways-1.0.0 - heritage: Tiller - release: RELEASE-NAME ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: istio-ingressgateway-service-account - namespace: istio-system - labels: - app: ingressgateway - chart: gateways-1.0.0 - heritage: Tiller - release: RELEASE-NAME ---- - ---- -# Source: istio/charts/grafana/templates/create-custom-resources-job.yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: istio-grafana-post-install-account - namespace: istio-system - labels: - app: istio-grafana - chart: grafana-0.1.0 - heritage: Tiller - release: RELEASE-NAME ---- -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRole -metadata: - name: istio-grafana-post-install-istio-system - labels: - app: istio-grafana - chart: grafana-0.1.0 - heritage: Tiller - release: RELEASE-NAME -rules: -- apiGroups: ["authentication.istio.io"] # needed to create default authn policy - resources: ["*"] - verbs: ["*"] ---- -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: istio-grafana-post-install-role-binding-istio-system - labels: - app: istio-grafana - chart: grafana-0.1.0 - heritage: Tiller - release: RELEASE-NAME -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: istio-grafana-post-install-istio-system -subjects: - - kind: ServiceAccount - name: istio-grafana-post-install-account - namespace: istio-system ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: istio-grafana-post-install - namespace: istio-system - annotations: - "helm.sh/hook": post-install - "helm.sh/hook-delete-policy": hook-succeeded - labels: - app: istio-grafana - chart: grafana-0.1.0 - release: RELEASE-NAME - heritage: Tiller -spec: - template: - metadata: - name: istio-grafana-post-install - labels: - app: istio-grafana - release: RELEASE-NAME - spec: - serviceAccountName: istio-grafana-post-install-account - containers: - 
- name: hyperkube - image: "quay.io/coreos/hyperkube:v1.7.6_coreos.0" - command: [ "/bin/bash", "/tmp/grafana/run.sh", "/tmp/grafana/custom-resources.yaml" ] - volumeMounts: - - mountPath: "/tmp/grafana" - name: tmp-configmap-grafana - volumes: - - name: tmp-configmap-grafana - configMap: - name: istio-grafana-custom-resources - restartPolicy: OnFailure - ---- -# Source: istio/charts/mixer/templates/serviceaccount.yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: istio-mixer-service-account - namespace: istio-system - labels: - app: mixer - chart: mixer-1.0.0 - heritage: Tiller - release: RELEASE-NAME - ---- -# Source: istio/charts/pilot/templates/serviceaccount.yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: istio-pilot-service-account - namespace: istio-system - labels: - app: istio-pilot - chart: pilot-1.0.0 - heritage: Tiller - release: RELEASE-NAME - ---- -# Source: istio/charts/prometheus/templates/serviceaccount.yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: prometheus - namespace: istio-system - ---- -# Source: istio/charts/security/templates/cleanup-secrets.yaml -# The reason for creating a ServiceAccount and ClusterRole specifically for this -# post-delete hooked job is because the citadel ServiceAccount is being deleted -# before this hook is launched. On the other hand, running this hook before the -# deletion of the citadel (e.g. pre-delete) won't delete the secrets because they -# will be re-created immediately by the to-be-deleted citadel. -# -# It's also important that the ServiceAccount, ClusterRole and ClusterRoleBinding -# will be ready before running the hooked Job therefore the hook weights. - -apiVersion: v1 -kind: ServiceAccount -metadata: - name: istio-cleanup-secrets-service-account - namespace: istio-system - annotations: - "helm.sh/hook": post-delete - "helm.sh/hook-delete-policy": hook-succeeded - "helm.sh/hook-weight": "1" - labels: - app: security - chart: security-1.0.0 - heritage: Tiller - release: RELEASE-NAME ---- -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRole -metadata: - name: istio-cleanup-secrets-istio-system - annotations: - "helm.sh/hook": post-delete - "helm.sh/hook-delete-policy": hook-succeeded - "helm.sh/hook-weight": "1" - labels: - app: security - chart: security-1.0.0 - heritage: Tiller - release: RELEASE-NAME -rules: -- apiGroups: [""] - resources: ["secrets"] - verbs: ["list", "delete"] ---- -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: istio-cleanup-secrets-istio-system - annotations: - "helm.sh/hook": post-delete - "helm.sh/hook-delete-policy": hook-succeeded - "helm.sh/hook-weight": "2" - labels: - app: security - chart: security-1.0.0 - heritage: Tiller - release: RELEASE-NAME -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: istio-cleanup-secrets-istio-system -subjects: - - kind: ServiceAccount - name: istio-cleanup-secrets-service-account - namespace: istio-system ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: istio-cleanup-secrets - namespace: istio-system - annotations: - "helm.sh/hook": post-delete - "helm.sh/hook-delete-policy": hook-succeeded - "helm.sh/hook-weight": "3" - labels: - app: security - chart: security-1.0.0 - release: RELEASE-NAME - heritage: Tiller -spec: - template: - metadata: - name: istio-cleanup-secrets - labels: - app: security - release: RELEASE-NAME - spec: - serviceAccountName: istio-cleanup-secrets-service-account - containers: - - name: hyperkube - image: 
"quay.io/coreos/hyperkube:v1.7.6_coreos.0" - command: - - /bin/bash - - -c - - > - kubectl get secret --all-namespaces | grep "istio.io/key-and-cert" | while read -r entry; do - ns=$(echo $entry | awk '{print $1}'); - name=$(echo $entry | awk '{print $2}'); - kubectl delete secret $name -n $ns; - done - restartPolicy: OnFailure - ---- -# Source: istio/charts/security/templates/serviceaccount.yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: istio-citadel-service-account - namespace: istio-system - labels: - app: security - chart: security-1.0.0 - heritage: Tiller - release: RELEASE-NAME - ---- -# Source: istio/charts/sidecarInjectorWebhook/templates/serviceaccount.yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: istio-sidecar-injector-service-account - namespace: istio-system - labels: - app: istio-sidecar-injector - chart: sidecarInjectorWebhook-1.0.0 - heritage: Tiller - release: RELEASE-NAME - ---- -# Source: istio/templates/crds.yaml -# -# these CRDs only make sense when pilot is enabled -# -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: virtualservices.networking.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: istio-pilot -spec: - group: networking.istio.io - names: - kind: VirtualService - listKind: VirtualServiceList - plural: virtualservices - singular: virtualservice - categories: - - istio-io - - networking-istio-io - scope: Namespaced - version: v1alpha3 ---- -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: destinationrules.networking.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: istio-pilot -spec: - group: networking.istio.io - names: - kind: DestinationRule - listKind: DestinationRuleList - plural: destinationrules - singular: destinationrule - categories: - - istio-io - - networking-istio-io - scope: Namespaced - version: v1alpha3 ---- -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: serviceentries.networking.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: istio-pilot -spec: - group: networking.istio.io - names: - kind: ServiceEntry - listKind: ServiceEntryList - plural: serviceentries - singular: serviceentry - categories: - - istio-io - - networking-istio-io - scope: Namespaced - version: v1alpha3 ---- -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: gateways.networking.istio.io - annotations: - "helm.sh/hook": crd-install - "helm.sh/hook-weight": "-5" - labels: - app: istio-pilot -spec: - group: networking.istio.io - names: - kind: Gateway - plural: gateways - singular: gateway - categories: - - istio-io - - networking-istio-io - scope: Namespaced - version: v1alpha3 ---- -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: envoyfilters.networking.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: istio-pilot -spec: - group: networking.istio.io - names: - kind: EnvoyFilter - plural: envoyfilters - singular: envoyfilter - categories: - - istio-io - - networking-istio-io - scope: Namespaced - version: v1alpha3 ---- -# - -# these CRDs only make sense when security is enabled -# - -# -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - annotations: - "helm.sh/hook": crd-install - name: httpapispecbindings.config.istio.io -spec: - group: config.istio.io - names: - kind: HTTPAPISpecBinding - plural: httpapispecbindings 
- singular: httpapispecbinding - categories: - - istio-io - - apim-istio-io - scope: Namespaced - version: v1alpha2 ---- -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - annotations: - "helm.sh/hook": crd-install - name: httpapispecs.config.istio.io -spec: - group: config.istio.io - names: - kind: HTTPAPISpec - plural: httpapispecs - singular: httpapispec - categories: - - istio-io - - apim-istio-io - scope: Namespaced - version: v1alpha2 ---- -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - annotations: - "helm.sh/hook": crd-install - name: quotaspecbindings.config.istio.io -spec: - group: config.istio.io - names: - kind: QuotaSpecBinding - plural: quotaspecbindings - singular: quotaspecbinding - categories: - - istio-io - - apim-istio-io - scope: Namespaced - version: v1alpha2 ---- -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - annotations: - "helm.sh/hook": crd-install - name: quotaspecs.config.istio.io -spec: - group: config.istio.io - names: - kind: QuotaSpec - plural: quotaspecs - singular: quotaspec - categories: - - istio-io - - apim-istio-io - scope: Namespaced - version: v1alpha2 ---- - -# Mixer CRDs -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: rules.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: istio.io.mixer - istio: core -spec: - group: config.istio.io - names: - kind: rule - plural: rules - singular: rule - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: attributemanifests.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: istio.io.mixer - istio: core -spec: - group: config.istio.io - names: - kind: attributemanifest - plural: attributemanifests - singular: attributemanifest - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: bypasses.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: bypass - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: bypass - plural: bypasses - singular: bypass - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: circonuses.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: circonus - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: circonus - plural: circonuses - singular: circonus - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: deniers.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: denier - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: denier - plural: deniers - singular: denier - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: fluentds.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: 
fluentd - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: fluentd - plural: fluentds - singular: fluentd - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: kubernetesenvs.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: kubernetesenv - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: kubernetesenv - plural: kubernetesenvs - singular: kubernetesenv - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: listcheckers.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: listchecker - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: listchecker - plural: listcheckers - singular: listchecker - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: memquotas.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: memquota - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: memquota - plural: memquotas - singular: memquota - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: noops.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: noop - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: noop - plural: noops - singular: noop - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: opas.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: opa - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: opa - plural: opas - singular: opa - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: prometheuses.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: prometheus - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: prometheus - plural: prometheuses - singular: prometheus - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: rbacs.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: rbac - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: rbac - plural: rbacs - singular: rbac - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: redisquotas.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - package: redisquota - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: redisquota - plural: redisquotas - singular: redisquota - scope: Namespaced - version: v1alpha2 ---- - -kind: 
CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: servicecontrols.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: servicecontrol - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: servicecontrol - plural: servicecontrols - singular: servicecontrol - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 - ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: signalfxs.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: signalfx - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: signalfx - plural: signalfxs - singular: signalfx - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: solarwindses.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: solarwinds - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: solarwinds - plural: solarwindses - singular: solarwinds - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: stackdrivers.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: stackdriver - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: stackdriver - plural: stackdrivers - singular: stackdriver - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: statsds.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: statsd - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: statsd - plural: statsds - singular: statsd - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: stdios.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: stdio - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: stdio - plural: stdios - singular: stdio - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: apikeys.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: apikey - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: apikey - plural: apikeys - singular: apikey - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: authorizations.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: authorization - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: authorization - plural: authorizations - singular: authorization - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: checknothings.config.istio.io - 
annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: checknothing - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: checknothing - plural: checknothings - singular: checknothing - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: kuberneteses.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: adapter.template.kubernetes - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: kubernetes - plural: kuberneteses - singular: kubernetes - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: listentries.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: listentry - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: listentry - plural: listentries - singular: listentry - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: logentries.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: logentry - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: logentry - plural: logentries - singular: logentry - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: edges.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: edge - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: edge - plural: edges - singular: edge - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: metrics.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: metric - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: metric - plural: metrics - singular: metric - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: quotas.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: quota - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: quota - plural: quotas - singular: quota - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: reportnothings.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: reportnothing - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: reportnothing - plural: reportnothings - singular: reportnothing - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: servicecontrolreports.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: servicecontrolreport - istio: 
mixer-instance -spec: - group: config.istio.io - names: - kind: servicecontrolreport - plural: servicecontrolreports - singular: servicecontrolreport - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: tracespans.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: tracespan - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: tracespan - plural: tracespans - singular: tracespan - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: rbacconfigs.rbac.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: istio.io.mixer - istio: rbac -spec: - group: rbac.istio.io - names: - kind: RbacConfig - plural: rbacconfigs - singular: rbacconfig - categories: - - istio-io - - rbac-istio-io - scope: Namespaced - version: v1alpha1 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: serviceroles.rbac.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: istio.io.mixer - istio: rbac -spec: - group: rbac.istio.io - names: - kind: ServiceRole - plural: serviceroles - singular: servicerole - categories: - - istio-io - - rbac-istio-io - scope: Namespaced - version: v1alpha1 ---- - -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: servicerolebindings.rbac.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: istio.io.mixer - istio: rbac -spec: - group: rbac.istio.io - names: - kind: ServiceRoleBinding - plural: servicerolebindings - singular: servicerolebinding - categories: - - istio-io - - rbac-istio-io - scope: Namespaced - version: v1alpha1 ---- -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: adapters.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: adapter - istio: mixer-adapter -spec: - group: config.istio.io - names: - kind: adapter - plural: adapters - singular: adapter - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: instances.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: instance - istio: mixer-instance -spec: - group: config.istio.io - names: - kind: instance - plural: instances - singular: instance - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: templates.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: template - istio: mixer-template -spec: - group: config.istio.io - names: - kind: template - plural: templates - singular: template - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- -kind: CustomResourceDefinition -apiVersion: apiextensions.k8s.io/v1beta1 -metadata: - name: handlers.config.istio.io - annotations: - "helm.sh/hook": crd-install - labels: - app: mixer - package: handler - istio: mixer-handler -spec: - group: config.istio.io - names: - kind: handler - plural: handlers - singular: 
handler - categories: - - istio-io - - policy-istio-io - scope: Namespaced - version: v1alpha2 ---- -# -# ---- -# Source: istio/charts/galley/templates/clusterrole.yaml -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRole -metadata: - name: istio-galley-istio-system - labels: - app: istio-galley - chart: galley-1.0.0 - heritage: Tiller - release: RELEASE-NAME -rules: -- apiGroups: ["admissionregistration.k8s.io"] - resources: ["validatingwebhookconfigurations"] - verbs: ["*"] -- apiGroups: ["config.istio.io"] # istio mixer CRD watcher - resources: ["*"] - verbs: ["get", "list", "watch"] -- apiGroups: ["*"] - resources: ["deployments"] - resourceNames: ["istio-galley"] - verbs: ["get"] - ---- -# Source: istio/charts/gateways/templates/clusterrole.yaml - -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRole -metadata: - labels: - app: gateways - chart: gateways-1.0.0 - heritage: Tiller - release: RELEASE-NAME - name: istio-egressgateway-istio-system -rules: -- apiGroups: ["extensions"] - resources: ["thirdpartyresources", "virtualservices", "destinationrules", "gateways"] - verbs: ["get", "watch", "list", "update"] ---- -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRole -metadata: - labels: - app: gateways - chart: gateways-1.0.0 - heritage: Tiller - release: RELEASE-NAME - name: istio-ingressgateway-istio-system -rules: -- apiGroups: ["extensions"] - resources: ["thirdpartyresources", "virtualservices", "destinationrules", "gateways"] - verbs: ["get", "watch", "list", "update"] ---- - ---- -# Source: istio/charts/mixer/templates/clusterrole.yaml -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRole -metadata: - name: istio-mixer-istio-system - labels: - app: mixer - chart: mixer-1.0.0 - heritage: Tiller - release: RELEASE-NAME -rules: -- apiGroups: ["config.istio.io"] # istio CRD watcher - resources: ["*"] - verbs: ["create", "get", "list", "watch", "patch"] -- apiGroups: ["rbac.istio.io"] # istio RBAC watcher - resources: ["*"] - verbs: ["get", "list", "watch"] -- apiGroups: ["apiextensions.k8s.io"] - resources: ["customresourcedefinitions"] - verbs: ["get", "list", "watch"] -- apiGroups: [""] - resources: ["configmaps", "endpoints", "pods", "services", "namespaces", "secrets"] - verbs: ["get", "list", "watch"] -- apiGroups: ["extensions"] - resources: ["replicasets"] - verbs: ["get", "list", "watch"] -- apiGroups: ["apps"] - resources: ["replicasets"] - verbs: ["get", "list", "watch"] - ---- -# Source: istio/charts/pilot/templates/clusterrole.yaml -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRole -metadata: - name: istio-pilot-istio-system - labels: - app: istio-pilot - chart: pilot-1.0.0 - heritage: Tiller - release: RELEASE-NAME -rules: -- apiGroups: ["config.istio.io"] - resources: ["*"] - verbs: ["*"] -- apiGroups: ["rbac.istio.io"] - resources: ["*"] - verbs: ["get", "watch", "list"] -- apiGroups: ["networking.istio.io"] - resources: ["*"] - verbs: ["*"] -- apiGroups: ["authentication.istio.io"] - resources: ["*"] - verbs: ["*"] -- apiGroups: ["apiextensions.k8s.io"] - resources: ["customresourcedefinitions"] - verbs: ["*"] -- apiGroups: ["extensions"] - resources: ["thirdpartyresources", "thirdpartyresources.extensions", "ingresses", "ingresses/status"] - verbs: ["*"] -- apiGroups: [""] - resources: ["configmaps"] - verbs: ["create", "get", "list", "watch", "update"] -- apiGroups: [""] - resources: ["endpoints", "pods", "services"] - verbs: ["get", "list", "watch"] -- apiGroups: [""] - resources: ["namespaces", 
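
All of the config.istio.io, rbac.istio.io and networking.istio.io CustomResourceDefinitions above carry the `helm.sh/hook: crd-install` annotation, so they have to be registered before any custom resources of those kinds can be created. A rough sanity check after applying the CRDs (the exact count differs between Istio releases, so treat the number only as an indicator):

```sh
# The API server should report the Istio CRDs once this manifest has been applied
kubectl get crd | grep -c 'istio.io'

# Confirm the networking kinds used later (Gateway, VirtualService, ...) are servable;
# "No resources found" is the expected answer on a fresh install
kubectl api-resources | grep istio          # kubectl >= 1.11
kubectl get virtualservices,destinationrules --all-namespaces
```
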
"nodes", "secrets"] - verbs: ["get", "list", "watch"] - ---- -# Source: istio/charts/prometheus/templates/clusterrole.yaml -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRole -metadata: - name: prometheus-istio-system -rules: -- apiGroups: [""] - resources: - - nodes - - services - - endpoints - - pods - - nodes/proxy - verbs: ["get", "list", "watch"] -- apiGroups: [""] - resources: - - configmaps - verbs: ["get"] -- nonResourceURLs: ["/metrics"] - verbs: ["get"] - ---- -# Source: istio/charts/security/templates/clusterrole.yaml -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRole -metadata: - name: istio-citadel-istio-system - labels: - app: security - chart: security-1.0.0 - heritage: Tiller - release: RELEASE-NAME -rules: -- apiGroups: [""] - resources: ["secrets"] - verbs: ["create", "get", "watch", "list", "update", "delete"] -- apiGroups: [""] - resources: ["serviceaccounts"] - verbs: ["get", "watch", "list"] -- apiGroups: [""] - resources: ["services"] - verbs: ["get", "watch", "list"] - ---- -# Source: istio/charts/sidecarInjectorWebhook/templates/clusterrole.yaml -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRole -metadata: - name: istio-sidecar-injector-istio-system - labels: - app: istio-sidecar-injector - chart: sidecarInjectorWebhook-1.0.0 - heritage: Tiller - release: RELEASE-NAME -rules: -- apiGroups: ["*"] - resources: ["configmaps"] - verbs: ["get", "list", "watch"] -- apiGroups: ["admissionregistration.k8s.io"] - resources: ["mutatingwebhookconfigurations"] - verbs: ["get", "list", "watch", "patch"] - ---- -# Source: istio/charts/galley/templates/clusterrolebinding.yaml -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: istio-galley-admin-role-binding-istio-system - labels: - app: istio-galley - chart: galley-1.0.0 - heritage: Tiller - release: RELEASE-NAME -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: istio-galley-istio-system -subjects: - - kind: ServiceAccount - name: istio-galley-service-account - namespace: istio-system - ---- -# Source: istio/charts/gateways/templates/clusterrolebindings.yaml - -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: istio-egressgateway-istio-system -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: istio-egressgateway-istio-system -subjects: - - kind: ServiceAccount - name: istio-egressgateway-service-account - namespace: istio-system ---- -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: istio-ingressgateway-istio-system -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: istio-ingressgateway-istio-system -subjects: - - kind: ServiceAccount - name: istio-ingressgateway-service-account - namespace: istio-system ---- - ---- -# Source: istio/charts/mixer/templates/clusterrolebinding.yaml -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: istio-mixer-admin-role-binding-istio-system - labels: - app: mixer - chart: mixer-1.0.0 - heritage: Tiller - release: RELEASE-NAME -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: istio-mixer-istio-system -subjects: - - kind: ServiceAccount - name: istio-mixer-service-account - namespace: istio-system - ---- -# Source: istio/charts/pilot/templates/clusterrolebinding.yaml -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: istio-pilot-istio-system - labels: - 
app: istio-pilot - chart: pilot-1.0.0 - heritage: Tiller - release: RELEASE-NAME -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: istio-pilot-istio-system -subjects: - - kind: ServiceAccount - name: istio-pilot-service-account - namespace: istio-system - ---- -# Source: istio/charts/prometheus/templates/clusterrolebindings.yaml -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: prometheus-istio-system -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: prometheus-istio-system -subjects: -- kind: ServiceAccount - name: prometheus - namespace: istio-system - ---- -# Source: istio/charts/security/templates/clusterrolebinding.yaml -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: istio-citadel-istio-system - labels: - app: security - chart: security-1.0.0 - heritage: Tiller - release: RELEASE-NAME -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: istio-citadel-istio-system -subjects: - - kind: ServiceAccount - name: istio-citadel-service-account - namespace: istio-system - ---- -# Source: istio/charts/sidecarInjectorWebhook/templates/clusterrolebinding.yaml -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: istio-sidecar-injector-admin-role-binding-istio-system - labels: - app: istio-sidecar-injector - chart: sidecarInjectorWebhook-1.0.0 - heritage: Tiller - release: RELEASE-NAME -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: istio-sidecar-injector-istio-system -subjects: - - kind: ServiceAccount - name: istio-sidecar-injector-service-account - namespace: istio-system - ---- -# Source: istio/charts/galley/templates/service.yaml -apiVersion: v1 -kind: Service -metadata: - name: istio-galley - namespace: istio-system - labels: - istio: galley -spec: - ports: - - port: 443 - name: https-validation - - port: 9093 - name: http-monitoring - selector: - istio: galley - ---- -# Source: istio/charts/gateways/templates/service.yaml - -apiVersion: v1 -kind: Service -metadata: - name: istio-egressgateway - namespace: istio-system - annotations: - labels: - chart: gateways-1.0.0 - release: RELEASE-NAME - heritage: Tiller - app: istio-egressgateway - istio: egressgateway -spec: - type: ClusterIP - selector: - app: istio-egressgateway - istio: egressgateway - ports: - - - name: http2 - port: 80 - - - name: https - port: 443 ---- -apiVersion: v1 -kind: Service -metadata: - name: istio-ingressgateway - namespace: istio-system - annotations: - labels: - chart: gateways-1.0.0 - release: RELEASE-NAME - heritage: Tiller - app: istio-ingressgateway - istio: ingressgateway -spec: - type: LoadBalancer - selector: - app: istio-ingressgateway - istio: ingressgateway - ports: - - - name: http2 - nodePort: 31380 - port: 80 - targetPort: 80 - - - name: https - nodePort: 31390 - port: 443 - - - name: tcp - nodePort: 31400 - port: 31400 - - - name: tcp-pilot-grpc-tls - port: 15011 - targetPort: 15011 - - - name: tcp-citadel-grpc-tls - port: 8060 - targetPort: 8060 - - - name: http2-prometheus - port: 15030 - targetPort: 15030 - - - name: http2-grafana - port: 15031 - targetPort: 15031 ---- - ---- -# Source: istio/charts/grafana/templates/service.yaml -apiVersion: v1 -kind: Service -metadata: - name: grafana - namespace: istio-system - annotations: - labels: - app: grafana - chart: grafana-0.1.0 - release: RELEASE-NAME - heritage: Tiller -spec: - type: ClusterIP - ports: - - port: 3000 - targetPort: 
3000 - protocol: TCP - name: http - selector: - app: grafana - ---- -# Source: istio/charts/mixer/templates/service.yaml - -apiVersion: v1 -kind: Service -metadata: - name: istio-policy - namespace: istio-system - labels: - chart: mixer-1.0.0 - release: RELEASE-NAME - istio: mixer -spec: - ports: - - name: grpc-mixer - port: 9091 - - name: grpc-mixer-mtls - port: 15004 - - name: http-monitoring - port: 9093 - selector: - istio: mixer - istio-mixer-type: policy ---- -apiVersion: v1 -kind: Service -metadata: - name: istio-telemetry - namespace: istio-system - labels: - chart: mixer-1.0.0 - release: RELEASE-NAME - istio: mixer -spec: - ports: - - name: grpc-mixer - port: 9091 - - name: grpc-mixer-mtls - port: 15004 - - name: http-monitoring - port: 9093 - - name: prometheus - port: 42422 - selector: - istio: mixer - istio-mixer-type: telemetry ---- - ---- -# Source: istio/charts/mixer/templates/statsdtoprom.yaml - ---- -apiVersion: v1 -kind: Service -metadata: - name: istio-statsd-prom-bridge - namespace: istio-system - labels: - chart: mixer-1.0.0 - release: RELEASE-NAME - istio: statsd-prom-bridge -spec: - ports: - - name: statsd-prom - port: 9102 - - name: statsd-udp - port: 9125 - protocol: UDP - selector: - istio: statsd-prom-bridge - ---- - -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: istio-statsd-prom-bridge - namespace: istio-system - labels: - chart: mixer-1.0.0 - release: RELEASE-NAME - istio: mixer -spec: - template: - metadata: - labels: - istio: statsd-prom-bridge - annotations: - sidecar.istio.io/inject: "false" - spec: - serviceAccountName: istio-mixer-service-account - volumes: - - name: config-volume - configMap: - name: istio-statsd-prom-bridge - containers: - - name: statsd-prom-bridge - image: "docker.io/prom/statsd-exporter:v0.6.0" - imagePullPolicy: IfNotPresent - ports: - - containerPort: 9102 - - containerPort: 9125 - protocol: UDP - args: - - '-statsd.mapping-config=/etc/statsd/mapping.conf' - resources: - requests: - cpu: 10m - - volumeMounts: - - name: config-volume - mountPath: /etc/statsd - ---- -# Source: istio/charts/pilot/templates/service.yaml -apiVersion: v1 -kind: Service -metadata: - name: istio-pilot - namespace: istio-system - labels: - app: istio-pilot - chart: pilot-1.0.0 - release: RELEASE-NAME - heritage: Tiller -spec: - ports: - - port: 15010 - name: grpc-xds # direct - - port: 15011 - name: https-xds # mTLS - - port: 8080 - name: http-legacy-discovery # direct - - port: 9093 - name: http-monitoring - selector: - istio: pilot - ---- -# Source: istio/charts/prometheus/templates/service.yaml -apiVersion: v1 -kind: Service -metadata: - name: prometheus - namespace: istio-system - annotations: - prometheus.io/scrape: 'true' - labels: - name: prometheus -spec: - selector: - app: prometheus - ports: - - name: http-prometheus - protocol: TCP - port: 9090 - ---- -# Source: istio/charts/security/templates/service.yaml -apiVersion: v1 -kind: Service -metadata: - # we use the normal name here (e.g. 
'prometheus') - # as grafana is configured to use this as a data source - name: istio-citadel - namespace: istio-system - labels: - app: istio-citadel -spec: - ports: - - name: grpc-citadel - port: 8060 - targetPort: 8060 - protocol: TCP - - name: http-monitoring - port: 9093 - selector: - istio: citadel - ---- -# Source: istio/charts/servicegraph/templates/service.yaml -apiVersion: v1 -kind: Service -metadata: - name: servicegraph - namespace: istio-system - annotations: - labels: - app: servicegraph - chart: servicegraph-0.1.0 - release: RELEASE-NAME - heritage: Tiller -spec: - type: ClusterIP - ports: - - port: 8088 - targetPort: 8088 - protocol: TCP - name: http - selector: - app: servicegraph - ---- -# Source: istio/charts/sidecarInjectorWebhook/templates/service.yaml -apiVersion: v1 -kind: Service -metadata: - name: istio-sidecar-injector - namespace: istio-system - labels: - istio: sidecar-injector -spec: - ports: - - port: 443 - selector: - istio: sidecar-injector - ---- -# Source: istio/charts/galley/templates/deployment.yaml -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: istio-galley - namespace: istio-system - labels: - app: galley - chart: galley-1.0.0 - release: RELEASE-NAME - heritage: Tiller - istio: galley -spec: - replicas: 1 - strategy: - rollingUpdate: - maxSurge: 1 - maxUnavailable: 0 - template: - metadata: - labels: - istio: galley - annotations: - sidecar.istio.io/inject: "false" - scheduler.alpha.kubernetes.io/critical-pod: "" - spec: - serviceAccountName: istio-galley-service-account - containers: - - name: validator - image: "gcr.io/istio-release/galley:1.0.0" - imagePullPolicy: IfNotPresent - ports: - - containerPort: 443 - - containerPort: 9093 - command: - - /usr/local/bin/galley - - validator - - --deployment-namespace=istio-system - - --caCertFile=/etc/istio/certs/root-cert.pem - - --tlsCertFile=/etc/istio/certs/cert-chain.pem - - --tlsKeyFile=/etc/istio/certs/key.pem - - --healthCheckInterval=2s - - --healthCheckFile=/health - - --webhook-config-file - - /etc/istio/config/validatingwebhookconfiguration.yaml - volumeMounts: - - name: certs - mountPath: /etc/istio/certs - readOnly: true - - name: config - mountPath: /etc/istio/config - readOnly: true - livenessProbe: - exec: - command: - - /usr/local/bin/galley - - probe - - --probe-path=/health - - --interval=4s - initialDelaySeconds: 4 - periodSeconds: 4 - readinessProbe: - exec: - command: - - /usr/local/bin/galley - - probe - - --probe-path=/health - - --interval=4s - initialDelaySeconds: 4 - periodSeconds: 4 - resources: - requests: - cpu: 10m - - volumes: - - name: certs - secret: - secretName: istio.istio-galley-service-account - - name: config - configMap: - name: istio-galley-configuration - affinity: - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - ppc64le - - s390x - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - ppc64le - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - s390x - ---- -# Source: istio/charts/gateways/templates/deployment.yaml - -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: istio-egressgateway - namespace: istio-system - labels: 
- app: egressgateway - chart: gateways-1.0.0 - release: RELEASE-NAME - heritage: Tiller - app: istio-egressgateway - istio: egressgateway -spec: - replicas: 1 - template: - metadata: - labels: - app: istio-egressgateway - istio: egressgateway - annotations: - sidecar.istio.io/inject: "false" - scheduler.alpha.kubernetes.io/critical-pod: "" - spec: - serviceAccountName: istio-egressgateway-service-account - containers: - - name: egressgateway - image: "gcr.io/istio-release/proxyv2:1.0.0" - imagePullPolicy: IfNotPresent - ports: - - containerPort: 80 - - containerPort: 443 - args: - - proxy - - router - - -v - - "2" - - --discoveryRefreshDelay - - '1s' #discoveryRefreshDelay - - --drainDuration - - '45s' #drainDuration - - --parentShutdownDuration - - '1m0s' #parentShutdownDuration - - --connectTimeout - - '10s' #connectTimeout - - --serviceCluster - - istio-egressgateway - - --zipkinAddress - - zipkin:9411 - - --statsdUdpAddress - - istio-statsd-prom-bridge:9125 - - --proxyAdminPort - - "15000" - - --controlPlaneAuthPolicy - - NONE - - --discoveryAddress - - istio-pilot.istio-system:8080 - resources: - requests: - cpu: 10m - - env: - - name: POD_NAME - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: metadata.namespace - - name: INSTANCE_IP - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: status.podIP - - name: ISTIO_META_POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - volumeMounts: - - name: istio-certs - mountPath: /etc/certs - readOnly: true - - name: egressgateway-certs - mountPath: "/etc/istio/egressgateway-certs" - readOnly: true - - name: egressgateway-ca-certs - mountPath: "/etc/istio/egressgateway-ca-certs" - readOnly: true - volumes: - - name: istio-certs - secret: - secretName: istio.istio-egressgateway-service-account - optional: true - - name: egressgateway-certs - secret: - secretName: "istio-egressgateway-certs" - optional: true - - name: egressgateway-ca-certs - secret: - secretName: "istio-egressgateway-ca-certs" - optional: true - affinity: - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - ppc64le - - s390x - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - ppc64le - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - s390x ---- -apiVersion: extensions/v1beta1 -kind: DaemonSet -metadata: - name: istio-ingressgateway - namespace: istio-system - labels: - app: ingressgateway - chart: gateways-1.0.0 - release: RELEASE-NAME - heritage: Tiller - app: istio-ingressgateway - istio: ingressgateway -spec: - # replicas: 1 - template: - metadata: - labels: - app: istio-ingressgateway - istio: ingressgateway - annotations: - sidecar.istio.io/inject: "false" - scheduler.alpha.kubernetes.io/critical-pod: "" - spec: - serviceAccountName: istio-ingressgateway-service-account - containers: - - name: ingressgateway - image: "gcr.io/istio-release/proxyv2:1.0.0" - imagePullPolicy: IfNotPresent - ports: - - containerPort: 80 - - containerPort: 443 - - containerPort: 31400 - - containerPort: 15011 - - containerPort: 8060 - - containerPort: 15030 - - containerPort: 15031 - 
args: - - proxy - - router - - -v - - "2" - - --discoveryRefreshDelay - - '1s' #discoveryRefreshDelay - - --drainDuration - - '45s' #drainDuration - - --parentShutdownDuration - - '1m0s' #parentShutdownDuration - - --connectTimeout - - '10s' #connectTimeout - - --serviceCluster - - istio-ingressgateway - - --zipkinAddress - - zipkin:9411 - - --statsdUdpAddress - - istio-statsd-prom-bridge:9125 - - --proxyAdminPort - - "15000" - - --controlPlaneAuthPolicy - - NONE - - --discoveryAddress - - istio-pilot.istio-system:8080 - resources: - requests: - cpu: 10m - - env: - - name: POD_NAME - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: metadata.namespace - - name: INSTANCE_IP - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: status.podIP - - name: ISTIO_META_POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - volumeMounts: - - name: istio-certs - mountPath: /etc/certs - readOnly: true - - name: ingressgateway-certs - mountPath: "/etc/istio/ingressgateway-certs" - readOnly: true - - name: ingressgateway-ca-certs - mountPath: "/etc/istio/ingressgateway-ca-certs" - readOnly: true - volumes: - - name: istio-certs - secret: - secretName: istio.istio-ingressgateway-service-account - optional: true - - name: ingressgateway-certs - secret: - secretName: "istio-ingressgateway-certs" - optional: true - - name: ingressgateway-ca-certs - secret: - secretName: "istio-ingressgateway-ca-certs" - optional: true - affinity: - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - ppc64le - - s390x - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - ppc64le - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - s390x ---- - ---- -# Source: istio/charts/grafana/templates/deployment.yaml -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: grafana - namespace: istio-system - labels: - app: grafana - chart: grafana-0.1.0 - release: RELEASE-NAME - heritage: Tiller -spec: - replicas: 1 - template: - metadata: - labels: - app: grafana - annotations: - sidecar.istio.io/inject: "false" - scheduler.alpha.kubernetes.io/critical-pod: "" - spec: - containers: - - name: grafana - image: "gcr.io/istio-release/grafana:1.0.0" - imagePullPolicy: IfNotPresent - ports: - - containerPort: 3000 - readinessProbe: - httpGet: - path: /login - port: 3000 - env: - - name: GRAFANA_PORT - value: "3000" - - name: GF_AUTH_BASIC_ENABLED - value: "false" - - name: GF_AUTH_ANONYMOUS_ENABLED - value: "true" - - name: GF_AUTH_ANONYMOUS_ORG_ROLE - value: Admin - - name: GF_PATHS_DATA - value: /data/grafana - resources: - requests: - cpu: 10m - - volumeMounts: - - name: data - mountPath: /data/grafana - affinity: - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - ppc64le - - s390x - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - weight: 2 - preference: - matchExpressions: - - 
key: beta.kubernetes.io/arch - operator: In - values: - - ppc64le - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - s390x - volumes: - - name: data - emptyDir: {} - ---- -# Source: istio/charts/mixer/templates/deployment.yaml - -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: istio-policy - namespace: istio-system - labels: - chart: mixer-1.0.0 - release: RELEASE-NAME - istio: mixer -spec: - replicas: 1 - template: - metadata: - labels: - app: policy - istio: mixer - istio-mixer-type: policy - annotations: - sidecar.istio.io/inject: "false" - scheduler.alpha.kubernetes.io/critical-pod: "" - spec: - serviceAccountName: istio-mixer-service-account - volumes: - - name: istio-certs - secret: - secretName: istio.istio-mixer-service-account - optional: true - - name: uds-socket - emptyDir: {} - affinity: - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - ppc64le - - s390x - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - ppc64le - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - s390x - containers: - - name: mixer - image: "gcr.io/istio-release/mixer:1.0.0" - imagePullPolicy: IfNotPresent - ports: - - containerPort: 9093 - - containerPort: 42422 - args: - - --address - - unix:///sock/mixer.socket - - --configStoreURL=k8s:// - - --configDefaultNamespace=istio-system - - --trace_zipkin_url=http://zipkin:9411/api/v1/spans - resources: - requests: - cpu: 10m - - volumeMounts: - - name: uds-socket - mountPath: /sock - livenessProbe: - httpGet: - path: /version - port: 9093 - initialDelaySeconds: 5 - periodSeconds: 5 - - name: istio-proxy - image: "gcr.io/istio-release/proxyv2:1.0.0" - imagePullPolicy: IfNotPresent - ports: - - containerPort: 9091 - - containerPort: 15004 - args: - - proxy - - --serviceCluster - - istio-policy - - --templateFile - - /etc/istio/proxy/envoy_policy.yaml.tmpl - - --controlPlaneAuthPolicy - - NONE - env: - - name: POD_NAME - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: metadata.namespace - - name: INSTANCE_IP - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: status.podIP - resources: - requests: - cpu: 10m - - volumeMounts: - - name: istio-certs - mountPath: /etc/certs - readOnly: true - - name: uds-socket - mountPath: /sock - ---- -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: istio-telemetry - namespace: istio-system - labels: - chart: mixer-1.0.0 - release: RELEASE-NAME - istio: mixer -spec: - replicas: 1 - template: - metadata: - labels: - app: telemetry - istio: mixer - istio-mixer-type: telemetry - annotations: - sidecar.istio.io/inject: "false" - scheduler.alpha.kubernetes.io/critical-pod: "" - spec: - serviceAccountName: istio-mixer-service-account - volumes: - - name: istio-certs - secret: - secretName: istio.istio-mixer-service-account - optional: true - - name: uds-socket - emptyDir: {} - containers: - - name: mixer - image: "gcr.io/istio-release/mixer:1.0.0" - imagePullPolicy: IfNotPresent - ports: - - containerPort: 9093 - - containerPort: 42422 - 
args: - - --address - - unix:///sock/mixer.socket - - --configStoreURL=k8s:// - - --configDefaultNamespace=istio-system - - --trace_zipkin_url=http://zipkin:9411/api/v1/spans - resources: - requests: - cpu: 10m - - volumeMounts: - - name: uds-socket - mountPath: /sock - livenessProbe: - httpGet: - path: /version - port: 9093 - initialDelaySeconds: 5 - periodSeconds: 5 - - name: istio-proxy - image: "gcr.io/istio-release/proxyv2:1.0.0" - imagePullPolicy: IfNotPresent - ports: - - containerPort: 9091 - - containerPort: 15004 - args: - - proxy - - --serviceCluster - - istio-telemetry - - --templateFile - - /etc/istio/proxy/envoy_telemetry.yaml.tmpl - - --controlPlaneAuthPolicy - - NONE - env: - - name: POD_NAME - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: metadata.namespace - - name: INSTANCE_IP - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: status.podIP - resources: - requests: - cpu: 10m - - volumeMounts: - - name: istio-certs - mountPath: /etc/certs - readOnly: true - - name: uds-socket - mountPath: /sock - ---- - ---- -# Source: istio/charts/pilot/templates/deployment.yaml -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: istio-pilot - namespace: istio-system - # TODO: default template doesn't have this, which one is right ? - labels: - app: istio-pilot - chart: pilot-1.0.0 - release: RELEASE-NAME - heritage: Tiller - istio: pilot - annotations: - checksum/config-volume: f8da08b6b8c170dde721efd680270b2901e750d4aa186ebb6c22bef5b78a43f9 -spec: - replicas: 1 - template: - metadata: - labels: - istio: pilot - app: pilot - annotations: - sidecar.istio.io/inject: "false" - scheduler.alpha.kubernetes.io/critical-pod: "" - spec: - serviceAccountName: istio-pilot-service-account - containers: - - name: discovery - image: "gcr.io/istio-release/pilot:1.0.0" - imagePullPolicy: IfNotPresent - args: - - "discovery" - ports: - - containerPort: 8080 - - containerPort: 15010 - readinessProbe: - httpGet: - path: /debug/endpointz - port: 8080 - initialDelaySeconds: 30 - periodSeconds: 30 - timeoutSeconds: 5 - env: - - name: POD_NAME - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: metadata.namespace - - name: PILOT_THROTTLE - value: "500" - - name: PILOT_CACHE_SQUASH - value: "5" - - name: PILOT_TRACE_SAMPLING - value: "100" - resources: - requests: - cpu: 500m - memory: 2048Mi - - volumeMounts: - - name: config-volume - mountPath: /etc/istio/config - - name: istio-certs - mountPath: /etc/certs - readOnly: true - - name: istio-proxy - image: "gcr.io/istio-release/proxyv2:1.0.0" - imagePullPolicy: IfNotPresent - ports: - - containerPort: 15003 - - containerPort: 15005 - - containerPort: 15007 - - containerPort: 15011 - args: - - proxy - - --serviceCluster - - istio-pilot - - --templateFile - - /etc/istio/proxy/envoy_pilot.yaml.tmpl - - --controlPlaneAuthPolicy - - NONE - env: - - name: POD_NAME - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: metadata.namespace - - name: INSTANCE_IP - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: status.podIP - resources: - requests: - cpu: 10m - - volumeMounts: - - name: istio-certs - mountPath: /etc/certs - readOnly: true - volumes: - - name: config-volume - configMap: - name: istio - - name: istio-certs - secret: - secretName: 
istio.istio-pilot-service-account - affinity: - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - ppc64le - - s390x - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - ppc64le - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - s390x - ---- -# Source: istio/charts/prometheus/templates/deployment.yaml -# TODO: the original template has service account, roles, etc -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: prometheus - namespace: istio-system - labels: - app: prometheus - chart: prometheus-0.1.0 - release: RELEASE-NAME - heritage: Tiller -spec: - replicas: 1 - selector: - matchLabels: - app: prometheus - template: - metadata: - labels: - app: prometheus - annotations: - sidecar.istio.io/inject: "false" - scheduler.alpha.kubernetes.io/critical-pod: "" - spec: - serviceAccountName: prometheus - containers: - - name: prometheus - image: "docker.io/prom/prometheus:v2.3.1" - imagePullPolicy: IfNotPresent - args: - - '--storage.tsdb.retention=6h' - - '--config.file=/etc/prometheus/prometheus.yml' - ports: - - containerPort: 9090 - name: http - livenessProbe: - httpGet: - path: /-/healthy - port: 9090 - readinessProbe: - httpGet: - path: /-/ready - port: 9090 - resources: - requests: - cpu: 10m - - volumeMounts: - - name: config-volume - mountPath: /etc/prometheus - volumes: - - name: config-volume - configMap: - name: prometheus - affinity: - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - ppc64le - - s390x - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - ppc64le - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - s390x - ---- -# Source: istio/charts/security/templates/deployment.yaml -# istio CA watching all namespaces -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: istio-citadel - namespace: istio-system - labels: - app: security - chart: security-1.0.0 - release: RELEASE-NAME - heritage: Tiller - istio: citadel -spec: - replicas: 1 - template: - metadata: - labels: - istio: citadel - annotations: - sidecar.istio.io/inject: "false" - scheduler.alpha.kubernetes.io/critical-pod: "" - spec: - serviceAccountName: istio-citadel-service-account - containers: - - name: citadel - image: "gcr.io/istio-release/citadel:1.0.0" - imagePullPolicy: IfNotPresent - args: - - --append-dns-names=true - - --grpc-port=8060 - - --grpc-hostname=citadel - - --citadel-storage-namespace=istio-system - - --self-signed-ca=true - resources: - requests: - cpu: 10m - - affinity: - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - ppc64le - - s390x - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 2 - preference: - 
matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - ppc64le - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - s390x - ---- -# Source: istio/charts/servicegraph/templates/deployment.yaml -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: servicegraph - namespace: istio-system - labels: - app: servicegraph - chart: servicegraph-0.1.0 - release: RELEASE-NAME - heritage: Tiller -spec: - replicas: 1 - template: - metadata: - labels: - app: servicegraph - annotations: - sidecar.istio.io/inject: "false" - scheduler.alpha.kubernetes.io/critical-pod: "" - spec: - containers: - - name: servicegraph - image: "gcr.io/istio-release/servicegraph:1.0.0" - imagePullPolicy: IfNotPresent - ports: - - containerPort: 8088 - args: - - --prometheusAddr=http://prometheus:9090 - livenessProbe: - httpGet: - path: /graph - port: 8088 - readinessProbe: - httpGet: - path: /graph - port: 8088 - resources: - requests: - cpu: 10m - - affinity: - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - ppc64le - - s390x - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - ppc64le - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - s390x - ---- -# Source: istio/charts/sidecarInjectorWebhook/templates/deployment.yaml -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: istio-sidecar-injector - namespace: istio-system - labels: - app: sidecarInjectorWebhook - chart: sidecarInjectorWebhook-1.0.0 - release: RELEASE-NAME - heritage: Tiller - istio: sidecar-injector -spec: - replicas: 1 - template: - metadata: - labels: - istio: sidecar-injector - annotations: - sidecar.istio.io/inject: "false" - scheduler.alpha.kubernetes.io/critical-pod: "" - spec: - serviceAccountName: istio-sidecar-injector-service-account - containers: - - name: sidecar-injector-webhook - image: "gcr.io/istio-release/sidecar_injector:1.0.0" - imagePullPolicy: IfNotPresent - args: - - --caCertFile=/etc/istio/certs/root-cert.pem - - --tlsCertFile=/etc/istio/certs/cert-chain.pem - - --tlsKeyFile=/etc/istio/certs/key.pem - - --injectConfig=/etc/istio/inject/config - - --meshConfig=/etc/istio/config/mesh - - --healthCheckInterval=2s - - --healthCheckFile=/health - volumeMounts: - - name: config-volume - mountPath: /etc/istio/config - readOnly: true - - name: certs - mountPath: /etc/istio/certs - readOnly: true - - name: inject-config - mountPath: /etc/istio/inject - readOnly: true - livenessProbe: - exec: - command: - - /usr/local/bin/sidecar-injector - - probe - - --probe-path=/health - - --interval=4s - initialDelaySeconds: 4 - periodSeconds: 4 - readinessProbe: - exec: - command: - - /usr/local/bin/sidecar-injector - - probe - - --probe-path=/health - - --interval=4s - initialDelaySeconds: 4 - periodSeconds: 4 - resources: - requests: - cpu: 10m - - volumes: - - name: config-volume - configMap: - name: istio - - name: certs - secret: - secretName: istio.istio-sidecar-injector-service-account - - name: inject-config - configMap: 
- name: istio-sidecar-injector - items: - - key: config - path: config - affinity: - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - ppc64le - - s390x - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - ppc64le - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - s390x - ---- -# Source: istio/charts/tracing/templates/deployment.yaml -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: istio-tracing - namespace: istio-system - labels: - app: istio-tracing - chart: tracing-0.1.0 - release: RELEASE-NAME - heritage: Tiller -spec: - replicas: 1 - template: - metadata: - labels: - app: jaeger - annotations: - sidecar.istio.io/inject: "false" - scheduler.alpha.kubernetes.io/critical-pod: "" - spec: - containers: - - name: jaeger - image: "docker.io/jaegertracing/all-in-one:1.5" - imagePullPolicy: IfNotPresent - ports: - - containerPort: 9411 - - containerPort: 16686 - - containerPort: 5775 - protocol: UDP - - containerPort: 6831 - protocol: UDP - - containerPort: 6832 - protocol: UDP - env: - - name: POD_NAMESPACE - valueFrom: - fieldRef: - apiVersion: v1 - fieldPath: metadata.namespace - - name: COLLECTOR_ZIPKIN_HTTP_PORT - value: "9411" - - name: MEMORY_MAX_TRACES - value: "50000" - livenessProbe: - httpGet: - path: / - port: 16686 - readinessProbe: - httpGet: - path: / - port: 16686 - resources: - requests: - cpu: 10m - - affinity: - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - ppc64le - - s390x - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - ppc64le - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - s390x - ---- -# Source: istio/charts/pilot/templates/gateway.yaml -apiVersion: networking.istio.io/v1alpha3 -kind: Gateway -metadata: - name: istio-autogenerated-k8s-ingress - namespace: istio-system -spec: - selector: - istio: ingress - servers: - - port: - number: 80 - protocol: HTTP2 - name: http - hosts: - - "*" - ---- - ---- -# Source: istio/charts/gateways/templates/autoscale.yaml - -apiVersion: autoscaling/v2beta1 -kind: HorizontalPodAutoscaler -metadata: - name: istio-egressgateway - namespace: istio-system -spec: - maxReplicas: 5 - minReplicas: 1 - scaleTargetRef: - apiVersion: apps/v1beta1 - kind: Deployment - name: istio-egressgateway - metrics: - - type: Resource - resource: - name: cpu - targetAverageUtilization: 60 ---- -apiVersion: autoscaling/v2beta1 -kind: HorizontalPodAutoscaler -metadata: - name: istio-ingressgateway - namespace: istio-system -spec: - maxReplicas: 5 - minReplicas: 1 - scaleTargetRef: - apiVersion: apps/v1beta1 - kind: Deployment - name: istio-ingressgateway - metrics: - - type: Resource - resource: - name: cpu - targetAverageUtilization: 60 ---- - ---- -# Source: istio/charts/mixer/templates/autoscale.yaml - -apiVersion: 
autoscaling/v2beta1 -kind: HorizontalPodAutoscaler -metadata: - name: istio-policy - namespace: istio-system -spec: - maxReplicas: 5 - minReplicas: 1 - scaleTargetRef: - apiVersion: apps/v1beta1 - kind: Deployment - name: istio-policy - metrics: - - type: Resource - resource: - name: cpu - targetAverageUtilization: 80 ---- -apiVersion: autoscaling/v2beta1 -kind: HorizontalPodAutoscaler -metadata: - name: istio-telemetry - namespace: istio-system -spec: - maxReplicas: 5 - minReplicas: 1 - scaleTargetRef: - apiVersion: apps/v1beta1 - kind: Deployment - name: istio-telemetry - metrics: - - type: Resource - resource: - name: cpu - targetAverageUtilization: 80 ---- - ---- -# Source: istio/charts/pilot/templates/autoscale.yaml - -apiVersion: autoscaling/v2beta1 -kind: HorizontalPodAutoscaler -metadata: - name: istio-pilot -spec: - maxReplicas: 1 - minReplicas: 1 - scaleTargetRef: - apiVersion: apps/v1beta1 - kind: Deployment - name: istio-pilot - metrics: - - type: Resource - resource: - name: cpu - targetAverageUtilization: 55 ---- - ---- -# Source: istio/charts/tracing/templates/service-jaeger.yaml - - -apiVersion: v1 -kind: List -items: -- apiVersion: v1 - kind: Service - metadata: - name: jaeger-query - namespace: istio-system - annotations: - labels: - app: jaeger - jaeger-infra: jaeger-service - chart: tracing-0.1.0 - release: RELEASE-NAME - heritage: Tiller - spec: - ports: - - name: query-http - port: 16686 - protocol: TCP - targetPort: 16686 - selector: - app: jaeger -- apiVersion: v1 - kind: Service - metadata: - name: jaeger-collector - namespace: istio-system - labels: - app: jaeger - jaeger-infra: collector-service - chart: tracing-0.1.0 - release: RELEASE-NAME - heritage: Tiller - spec: - ports: - - name: jaeger-collector-tchannel - port: 14267 - protocol: TCP - targetPort: 14267 - - name: jaeger-collector-http - port: 14268 - targetPort: 14268 - protocol: TCP - selector: - app: jaeger - type: ClusterIP -- apiVersion: v1 - kind: Service - metadata: - name: jaeger-agent - namespace: istio-system - labels: - app: jaeger - jaeger-infra: agent-service - chart: tracing-0.1.0 - release: RELEASE-NAME - heritage: Tiller - spec: - ports: - - name: agent-zipkin-thrift - port: 5775 - protocol: UDP - targetPort: 5775 - - name: agent-compact - port: 6831 - protocol: UDP - targetPort: 6831 - - name: agent-binary - port: 6832 - protocol: UDP - targetPort: 6832 - clusterIP: None - selector: - app: jaeger - - - ---- -# Source: istio/charts/tracing/templates/service.yaml -apiVersion: v1 -kind: List -items: -- apiVersion: v1 - kind: Service - metadata: - name: zipkin - namespace: istio-system - labels: - app: jaeger - chart: tracing-0.1.0 - release: RELEASE-NAME - heritage: Tiller - spec: - type: ClusterIP - ports: - - port: 9411 - targetPort: 9411 - protocol: TCP - name: http - selector: - app: jaeger -- apiVersion: v1 - kind: Service - metadata: - name: tracing - namespace: istio-system - annotations: - labels: - app: jaeger - chart: tracing-0.1.0 - release: RELEASE-NAME - heritage: Tiller - spec: - ports: - - name: http-query - port: 80 - protocol: TCP - targetPort: 16686 - selector: - app: jaeger - ---- -# Source: istio/charts/sidecarInjectorWebhook/templates/mutatingwebhook.yaml -apiVersion: admissionregistration.k8s.io/v1beta1 -kind: MutatingWebhookConfiguration -metadata: - name: istio-sidecar-injector - namespace: istio-system - labels: - app: istio-sidecar-injector - chart: sidecarInjectorWebhook-1.0.0 - release: RELEASE-NAME - heritage: Tiller -webhooks: - - name: sidecar-injector.istio.io 
- clientConfig: - service: - name: istio-sidecar-injector - namespace: istio-system - path: "/inject" - caBundle: "" - rules: - - operations: [ "CREATE" ] - apiGroups: [""] - apiVersions: ["v1"] - resources: ["pods"] - failurePolicy: Fail - namespaceSelector: - matchLabels: - istio-injection: enabled - - ---- -# Source: istio/charts/galley/templates/validatingwehookconfiguration.yaml.tpl - - ---- -# Source: istio/charts/grafana/templates/grafana-ports-mtls.yaml - - ---- -# Source: istio/charts/grafana/templates/secret.yaml - ---- -# Source: istio/charts/pilot/templates/meshexpansion.yaml - - ---- -# Source: istio/charts/security/templates/create-custom-resources-job.yaml - - ---- -# Source: istio/charts/security/templates/enable-mesh-mtls.yaml - - ---- -# Source: istio/charts/security/templates/meshexpansion.yaml - - ---- - ---- -# Source: istio/charts/servicegraph/templates/ingress.yaml - ---- -# Source: istio/charts/telemetry-gateway/templates/gateway.yaml - - ---- -# Source: istio/charts/tracing/templates/ingress-jaeger.yaml - ---- -# Source: istio/charts/tracing/templates/ingress.yaml - ---- -# Source: istio/templates/install-custom-resources.sh.tpl - - ---- -# Source: istio/charts/mixer/templates/config.yaml -apiVersion: "config.istio.io/v1alpha2" -kind: attributemanifest -metadata: - name: istioproxy - namespace: istio-system -spec: - attributes: - origin.ip: - valueType: IP_ADDRESS - origin.uid: - valueType: STRING - origin.user: - valueType: STRING - request.headers: - valueType: STRING_MAP - request.id: - valueType: STRING - request.host: - valueType: STRING - request.method: - valueType: STRING - request.path: - valueType: STRING - request.reason: - valueType: STRING - request.referer: - valueType: STRING - request.scheme: - valueType: STRING - request.total_size: - valueType: INT64 - request.size: - valueType: INT64 - request.time: - valueType: TIMESTAMP - request.useragent: - valueType: STRING - response.code: - valueType: INT64 - response.duration: - valueType: DURATION - response.headers: - valueType: STRING_MAP - response.total_size: - valueType: INT64 - response.size: - valueType: INT64 - response.time: - valueType: TIMESTAMP - source.uid: - valueType: STRING - source.user: # DEPRECATED - valueType: STRING - source.principal: - valueType: STRING - destination.uid: - valueType: STRING - destination.principal: - valueType: STRING - destination.port: - valueType: INT64 - connection.event: - valueType: STRING - connection.id: - valueType: STRING - connection.received.bytes: - valueType: INT64 - connection.received.bytes_total: - valueType: INT64 - connection.sent.bytes: - valueType: INT64 - connection.sent.bytes_total: - valueType: INT64 - connection.duration: - valueType: DURATION - connection.mtls: - valueType: BOOL - context.protocol: - valueType: STRING - context.timestamp: - valueType: TIMESTAMP - context.time: - valueType: TIMESTAMP - # Deprecated, kept for compatibility - context.reporter.local: - valueType: BOOL - context.reporter.kind: - valueType: STRING - context.reporter.uid: - valueType: STRING - api.service: - valueType: STRING - api.version: - valueType: STRING - api.operation: - valueType: STRING - api.protocol: - valueType: STRING - request.auth.principal: - valueType: STRING - request.auth.audiences: - valueType: STRING - request.auth.presenter: - valueType: STRING - request.auth.claims: - valueType: STRING_MAP - request.auth.raw_claims: - valueType: STRING - request.api_key: - valueType: STRING - ---- -apiVersion: "config.istio.io/v1alpha2" -kind: 
attributemanifest -metadata: - name: kubernetes - namespace: istio-system -spec: - attributes: - source.ip: - valueType: IP_ADDRESS - source.labels: - valueType: STRING_MAP - source.metadata: - valueType: STRING_MAP - source.name: - valueType: STRING - source.namespace: - valueType: STRING - source.owner: - valueType: STRING - source.service: # DEPRECATED - valueType: STRING - source.serviceAccount: - valueType: STRING - source.services: - valueType: STRING - source.workload.uid: - valueType: STRING - source.workload.name: - valueType: STRING - source.workload.namespace: - valueType: STRING - destination.ip: - valueType: IP_ADDRESS - destination.labels: - valueType: STRING_MAP - destination.metadata: - valueType: STRING_MAP - destination.owner: - valueType: STRING - destination.name: - valueType: STRING - destination.container.name: - valueType: STRING - destination.namespace: - valueType: STRING - destination.service: # DEPRECATED - valueType: STRING - destination.service.uid: - valueType: STRING - destination.service.name: - valueType: STRING - destination.service.namespace: - valueType: STRING - destination.service.host: - valueType: STRING - destination.serviceAccount: - valueType: STRING - destination.workload.uid: - valueType: STRING - destination.workload.name: - valueType: STRING - destination.workload.namespace: - valueType: STRING ---- -apiVersion: "config.istio.io/v1alpha2" -kind: stdio -metadata: - name: handler - namespace: istio-system -spec: - outputAsJson: true ---- -apiVersion: "config.istio.io/v1alpha2" -kind: logentry -metadata: - name: accesslog - namespace: istio-system -spec: - severity: '"Info"' - timestamp: request.time - variables: - sourceIp: source.ip | ip("0.0.0.0") - sourceApp: source.labels["app"] | "" - sourcePrincipal: source.principal | "" - sourceName: source.name | "" - sourceWorkload: source.workload.name | "" - sourceNamespace: source.namespace | "" - sourceOwner: source.owner | "" - destinationApp: destination.labels["app"] | "" - destinationIp: destination.ip | ip("0.0.0.0") - destinationServiceHost: destination.service.host | "" - destinationWorkload: destination.workload.name | "" - destinationName: destination.name | "" - destinationNamespace: destination.namespace | "" - destinationOwner: destination.owner | "" - destinationPrincipal: destination.principal | "" - apiClaims: request.auth.raw_claims | "" - apiKey: request.api_key | request.headers["x-api-key"] | "" - protocol: request.scheme | context.protocol | "http" - method: request.method | "" - url: request.path | "" - responseCode: response.code | 0 - responseSize: response.size | 0 - requestSize: request.size | 0 - requestId: request.headers["x-request-id"] | "" - clientTraceId: request.headers["x-client-trace-id"] | "" - latency: response.duration | "0ms" - connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) - userAgent: request.useragent | "" - responseTimestamp: response.time - receivedBytes: request.total_size | 0 - sentBytes: response.total_size | 0 - referer: request.referer | "" - httpAuthority: request.headers[":authority"] | request.host | "" - xForwardedFor: request.headers["x-forwarded-for"] | "0.0.0.0" - reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") - monitored_resource_type: '"global"' ---- -apiVersion: "config.istio.io/v1alpha2" -kind: logentry -metadata: - name: tcpaccesslog - namespace: istio-system -spec: - 
severity: '"Info"' - timestamp: context.time | timestamp("2017-01-01T00:00:00Z") - variables: - connectionEvent: connection.event | "" - sourceIp: source.ip | ip("0.0.0.0") - sourceApp: source.labels["app"] | "" - sourcePrincipal: source.principal | "" - sourceName: source.name | "" - sourceWorkload: source.workload.name | "" - sourceNamespace: source.namespace | "" - sourceOwner: source.owner | "" - destinationApp: destination.labels["app"] | "" - destinationIp: destination.ip | ip("0.0.0.0") - destinationServiceHost: destination.service.host | "" - destinationWorkload: destination.workload.name | "" - destinationName: destination.name | "" - destinationNamespace: destination.namespace | "" - destinationOwner: destination.owner | "" - destinationPrincipal: destination.principal | "" - protocol: context.protocol | "tcp" - connectionDuration: connection.duration | "0ms" - connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) - receivedBytes: connection.received.bytes | 0 - sentBytes: connection.sent.bytes | 0 - totalReceivedBytes: connection.received.bytes_total | 0 - totalSentBytes: connection.sent.bytes_total | 0 - reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") - monitored_resource_type: '"global"' ---- -apiVersion: "config.istio.io/v1alpha2" -kind: rule -metadata: - name: stdio - namespace: istio-system -spec: - match: context.protocol == "http" || context.protocol == "grpc" - actions: - - handler: handler.stdio - instances: - - accesslog.logentry ---- -apiVersion: "config.istio.io/v1alpha2" -kind: rule -metadata: - name: stdiotcp - namespace: istio-system -spec: - match: context.protocol == "tcp" - actions: - - handler: handler.stdio - instances: - - tcpaccesslog.logentry ---- -apiVersion: "config.istio.io/v1alpha2" -kind: metric -metadata: - name: requestcount - namespace: istio-system -spec: - value: "1" - dimensions: - reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") - source_workload: source.workload.name | "unknown" - source_workload_namespace: source.workload.namespace | "unknown" - source_principal: source.principal | "unknown" - source_app: source.labels["app"] | "unknown" - source_version: source.labels["version"] | "unknown" - destination_workload: destination.workload.name | "unknown" - destination_workload_namespace: destination.workload.namespace | "unknown" - destination_principal: destination.principal | "unknown" - destination_app: destination.labels["app"] | "unknown" - destination_version: destination.labels["version"] | "unknown" - destination_service: destination.service.host | "unknown" - destination_service_name: destination.service.name | "unknown" - destination_service_namespace: destination.service.namespace | "unknown" - request_protocol: api.protocol | context.protocol | "unknown" - response_code: response.code | 200 - connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) - monitored_resource_type: '"UNSPECIFIED"' ---- -apiVersion: "config.istio.io/v1alpha2" -kind: metric -metadata: - name: requestduration - namespace: istio-system -spec: - value: response.duration | "0ms" - dimensions: - reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") - source_workload: source.workload.name | "unknown" - 
source_workload_namespace: source.workload.namespace | "unknown" - source_principal: source.principal | "unknown" - source_app: source.labels["app"] | "unknown" - source_version: source.labels["version"] | "unknown" - destination_workload: destination.workload.name | "unknown" - destination_workload_namespace: destination.workload.namespace | "unknown" - destination_principal: destination.principal | "unknown" - destination_app: destination.labels["app"] | "unknown" - destination_version: destination.labels["version"] | "unknown" - destination_service: destination.service.host | "unknown" - destination_service_name: destination.service.name | "unknown" - destination_service_namespace: destination.service.namespace | "unknown" - request_protocol: api.protocol | context.protocol | "unknown" - response_code: response.code | 200 - connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) - monitored_resource_type: '"UNSPECIFIED"' ---- -apiVersion: "config.istio.io/v1alpha2" -kind: metric -metadata: - name: requestsize - namespace: istio-system -spec: - value: request.size | 0 - dimensions: - reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") - source_workload: source.workload.name | "unknown" - source_workload_namespace: source.workload.namespace | "unknown" - source_principal: source.principal | "unknown" - source_app: source.labels["app"] | "unknown" - source_version: source.labels["version"] | "unknown" - destination_workload: destination.workload.name | "unknown" - destination_workload_namespace: destination.workload.namespace | "unknown" - destination_principal: destination.principal | "unknown" - destination_app: destination.labels["app"] | "unknown" - destination_version: destination.labels["version"] | "unknown" - destination_service: destination.service.host | "unknown" - destination_service_name: destination.service.name | "unknown" - destination_service_namespace: destination.service.namespace | "unknown" - request_protocol: api.protocol | context.protocol | "unknown" - response_code: response.code | 200 - connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) - monitored_resource_type: '"UNSPECIFIED"' ---- -apiVersion: "config.istio.io/v1alpha2" -kind: metric -metadata: - name: responsesize - namespace: istio-system -spec: - value: response.size | 0 - dimensions: - reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") - source_workload: source.workload.name | "unknown" - source_workload_namespace: source.workload.namespace | "unknown" - source_principal: source.principal | "unknown" - source_app: source.labels["app"] | "unknown" - source_version: source.labels["version"] | "unknown" - destination_workload: destination.workload.name | "unknown" - destination_workload_namespace: destination.workload.namespace | "unknown" - destination_principal: destination.principal | "unknown" - destination_app: destination.labels["app"] | "unknown" - destination_version: destination.labels["version"] | "unknown" - destination_service: destination.service.host | "unknown" - destination_service_name: destination.service.name | "unknown" - destination_service_namespace: destination.service.namespace | "unknown" - request_protocol: api.protocol | context.protocol | "unknown" - response_code: response.code 
| 200 - connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) - monitored_resource_type: '"UNSPECIFIED"' ---- -apiVersion: "config.istio.io/v1alpha2" -kind: metric -metadata: - name: tcpbytesent - namespace: istio-system -spec: - value: connection.sent.bytes | 0 - dimensions: - reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") - source_workload: source.workload.name | "unknown" - source_workload_namespace: source.workload.namespace | "unknown" - source_principal: source.principal | "unknown" - source_app: source.labels["app"] | "unknown" - source_version: source.labels["version"] | "unknown" - destination_workload: destination.workload.name | "unknown" - destination_workload_namespace: destination.workload.namespace | "unknown" - destination_principal: destination.principal | "unknown" - destination_app: destination.labels["app"] | "unknown" - destination_version: destination.labels["version"] | "unknown" - destination_service: destination.service.name | "unknown" - destination_service_name: destination.service.name | "unknown" - destination_service_namespace: destination.service.namespace | "unknown" - connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) - monitored_resource_type: '"UNSPECIFIED"' ---- -apiVersion: "config.istio.io/v1alpha2" -kind: metric -metadata: - name: tcpbytereceived - namespace: istio-system -spec: - value: connection.received.bytes | 0 - dimensions: - reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") - source_workload: source.workload.name | "unknown" - source_workload_namespace: source.workload.namespace | "unknown" - source_principal: source.principal | "unknown" - source_app: source.labels["app"] | "unknown" - source_version: source.labels["version"] | "unknown" - destination_workload: destination.workload.name | "unknown" - destination_workload_namespace: destination.workload.namespace | "unknown" - destination_principal: destination.principal | "unknown" - destination_app: destination.labels["app"] | "unknown" - destination_version: destination.labels["version"] | "unknown" - destination_service: destination.service.name | "unknown" - destination_service_name: destination.service.name | "unknown" - destination_service_namespace: destination.service.namespace | "unknown" - connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) - monitored_resource_type: '"UNSPECIFIED"' ---- -apiVersion: "config.istio.io/v1alpha2" -kind: prometheus -metadata: - name: handler - namespace: istio-system -spec: - metrics: - - name: requests_total - instance_name: requestcount.metric.istio-system - kind: COUNTER - label_names: - - reporter - - source_app - - source_principal - - source_workload - - source_workload_namespace - - source_version - - destination_app - - destination_principal - - destination_workload - - destination_workload_namespace - - destination_version - - destination_service - - destination_service_name - - destination_service_namespace - - request_protocol - - response_code - - connection_security_policy - - name: request_duration_seconds - instance_name: requestduration.metric.istio-system - kind: DISTRIBUTION - label_names: - - reporter - - 
source_app - - source_principal - - source_workload - - source_workload_namespace - - source_version - - destination_app - - destination_principal - - destination_workload - - destination_workload_namespace - - destination_version - - destination_service - - destination_service_name - - destination_service_namespace - - request_protocol - - response_code - - connection_security_policy - buckets: - explicit_buckets: - bounds: [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10] - - name: request_bytes - instance_name: requestsize.metric.istio-system - kind: DISTRIBUTION - label_names: - - reporter - - source_app - - source_principal - - source_workload - - source_workload_namespace - - source_version - - destination_app - - destination_principal - - destination_workload - - destination_workload_namespace - - destination_version - - destination_service - - destination_service_name - - destination_service_namespace - - request_protocol - - response_code - - connection_security_policy - buckets: - exponentialBuckets: - numFiniteBuckets: 8 - scale: 1 - growthFactor: 10 - - name: response_bytes - instance_name: responsesize.metric.istio-system - kind: DISTRIBUTION - label_names: - - reporter - - source_app - - source_principal - - source_workload - - source_workload_namespace - - source_version - - destination_app - - destination_principal - - destination_workload - - destination_workload_namespace - - destination_version - - destination_service - - destination_service_name - - destination_service_namespace - - request_protocol - - response_code - - connection_security_policy - buckets: - exponentialBuckets: - numFiniteBuckets: 8 - scale: 1 - growthFactor: 10 - - name: tcp_sent_bytes_total - instance_name: tcpbytesent.metric.istio-system - kind: COUNTER - label_names: - - reporter - - source_app - - source_principal - - source_workload - - source_workload_namespace - - source_version - - destination_app - - destination_principal - - destination_workload - - destination_workload_namespace - - destination_version - - destination_service - - destination_service_name - - destination_service_namespace - - connection_security_policy - - name: tcp_received_bytes_total - instance_name: tcpbytereceived.metric.istio-system - kind: COUNTER - label_names: - - reporter - - source_app - - source_principal - - source_workload - - source_workload_namespace - - source_version - - destination_app - - destination_principal - - destination_workload - - destination_workload_namespace - - destination_version - - destination_service - - destination_service_name - - destination_service_namespace - - connection_security_policy ---- -apiVersion: "config.istio.io/v1alpha2" -kind: rule -metadata: - name: promhttp - namespace: istio-system -spec: - match: context.protocol == "http" || context.protocol == "grpc" - actions: - - handler: handler.prometheus - instances: - - requestcount.metric - - requestduration.metric - - requestsize.metric - - responsesize.metric ---- -apiVersion: "config.istio.io/v1alpha2" -kind: rule -metadata: - name: promtcp - namespace: istio-system -spec: - match: context.protocol == "tcp" - actions: - - handler: handler.prometheus - instances: - - tcpbytesent.metric - - tcpbytereceived.metric ---- - -apiVersion: "config.istio.io/v1alpha2" -kind: kubernetesenv -metadata: - name: handler - namespace: istio-system -spec: - # when running from mixer root, use the following config after adding a - # symbolic link to a kubernetes config file via: - # - # $ ln -s ~/.kube/config 
mixer/adapter/kubernetes/kubeconfig - # - # kubeconfig_path: "mixer/adapter/kubernetes/kubeconfig" - ---- -apiVersion: "config.istio.io/v1alpha2" -kind: rule -metadata: - name: kubeattrgenrulerule - namespace: istio-system -spec: - actions: - - handler: handler.kubernetesenv - instances: - - attributes.kubernetes ---- -apiVersion: "config.istio.io/v1alpha2" -kind: rule -metadata: - name: tcpkubeattrgenrulerule - namespace: istio-system -spec: - match: context.protocol == "tcp" - actions: - - handler: handler.kubernetesenv - instances: - - attributes.kubernetes ---- -apiVersion: "config.istio.io/v1alpha2" -kind: kubernetes -metadata: - name: attributes - namespace: istio-system -spec: - # Pass the required attribute data to the adapter - source_uid: source.uid | "" - source_ip: source.ip | ip("0.0.0.0") # default to unspecified ip addr - destination_uid: destination.uid | "" - destination_port: destination.port | 0 - attribute_bindings: - # Fill the new attributes from the adapter produced output. - # $out refers to an instance of OutputTemplate message - source.ip: $out.source_pod_ip | ip("0.0.0.0") - source.uid: $out.source_pod_uid | "unknown" - source.labels: $out.source_labels | emptyStringMap() - source.name: $out.source_pod_name | "unknown" - source.namespace: $out.source_namespace | "default" - source.owner: $out.source_owner | "unknown" - source.serviceAccount: $out.source_service_account_name | "unknown" - source.workload.uid: $out.source_workload_uid | "unknown" - source.workload.name: $out.source_workload_name | "unknown" - source.workload.namespace: $out.source_workload_namespace | "unknown" - destination.ip: $out.destination_pod_ip | ip("0.0.0.0") - destination.uid: $out.destination_pod_uid | "unknown" - destination.labels: $out.destination_labels | emptyStringMap() - destination.name: $out.destination_pod_name | "unknown" - destination.container.name: $out.destination_container_name | "unknown" - destination.namespace: $out.destination_namespace | "default" - destination.owner: $out.destination_owner | "unknown" - destination.serviceAccount: $out.destination_service_account_name | "unknown" - destination.workload.uid: $out.destination_workload_uid | "unknown" - destination.workload.name: $out.destination_workload_name | "unknown" - destination.workload.namespace: $out.destination_workload_namespace | "unknown" - ---- -# Configuration needed by Mixer. -# Mixer cluster is delivered via CDS -# Specify mixer cluster settings -apiVersion: networking.istio.io/v1alpha3 -kind: DestinationRule -metadata: - name: istio-policy - namespace: istio-system -spec: - host: istio-policy.istio-system.svc.cluster.local - trafficPolicy: - connectionPool: - http: - http2MaxRequests: 10000 - maxRequestsPerConnection: 10000 ---- -apiVersion: networking.istio.io/v1alpha3 -kind: DestinationRule -metadata: - name: istio-telemetry - namespace: istio-system -spec: - host: istio-telemetry.istio-system.svc.cluster.local - trafficPolicy: - connectionPool: - http: - http2MaxRequests: 10000 - maxRequestsPerConnection: 10000 ---- - diff --git a/k8s-deployment-strategies.md b/k8s-deployment-strategies.md index 5242691..37a5a1d 100644 --- a/k8s-deployment-strategies.md +++ b/k8s-deployment-strategies.md @@ -40,7 +40,7 @@ 8. rollback: deploy virtualservice/my-app -> svc/my-app-v1 9. confirmed: remove deploy/my-app-v1 svc/my-app-v1 -# shadow deploy istio mode +# shadow deploy istio mode 1. create deploy/my-app-v1 2. 
create svc/my-app-v1 -> deploy/my-app-v1 diff --git a/keepalived/check_apiserver.sh b/keepalived/check_apiserver.sh deleted file mode 100755 index 3ceb7a8..0000000 --- a/keepalived/check_apiserver.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/bin/bash - -# if check error then repeat check for 12 times, else exit -err=0 -for k in $(seq 1 12) -do - check_code=$(ps -ef | grep kube-apiserver | grep -v color | grep -v grep | wc -l) - if [[ $check_code == "0" ]]; then - err=$(expr $err + 1) - sleep 5 - continue - else - err=0 - break - fi -done - -if [[ $err != "0" ]]; then - echo "systemctl stop keepalived" - /usr/bin/systemctl stop keepalived - exit 1 -else - exit 0 -fi diff --git a/keepalived/keepalived.conf.tpl b/keepalived/keepalived.conf.tpl deleted file mode 100644 index 5178302..0000000 --- a/keepalived/keepalived.conf.tpl +++ /dev/null @@ -1,29 +0,0 @@ -! Configuration File for keepalived -global_defs { - router_id LVS_DEVEL -} -vrrp_script chk_apiserver { - script "/etc/keepalived/check_apiserver.sh" - interval 2 - weight -5 - fall 3 - rise 2 -} -vrrp_instance VI_1 { - state K8SHA_KA_STATE - interface K8SHA_KA_INTF - mcast_src_ip K8SHA_IPLOCAL - virtual_router_id 51 - priority K8SHA_KA_PRIO - advert_int 2 - authentication { - auth_type PASS - auth_pass K8SHA_KA_AUTH - } - virtual_ipaddress { - K8SHA_VIP - } - track_script { - chk_apiserver - } -} diff --git a/metrics-server-0.3.7/components.yaml b/metrics-server-0.3.7/components.yaml new file mode 100644 index 0000000..3a84719 --- /dev/null +++ b/metrics-server-0.3.7/components.yaml @@ -0,0 +1,163 @@ +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: system:aggregated-metrics-reader + labels: + rbac.authorization.k8s.io/aggregate-to-view: "true" + rbac.authorization.k8s.io/aggregate-to-edit: "true" + rbac.authorization.k8s.io/aggregate-to-admin: "true" +rules: +- apiGroups: ["metrics.k8s.io"] + resources: ["pods", "nodes"] + verbs: ["get", "list", "watch"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: metrics-server:system:auth-delegator +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: system:auth-delegator +subjects: +- kind: ServiceAccount + name: metrics-server + namespace: kube-system +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: metrics-server-auth-reader + namespace: kube-system +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: extension-apiserver-authentication-reader +subjects: +- kind: ServiceAccount + name: metrics-server + namespace: kube-system +--- +apiVersion: apiregistration.k8s.io/v1beta1 +#apiVersion: apiregistration.k8s.io/v1 +kind: APIService +metadata: + name: v1beta1.metrics.k8s.io +spec: + service: + name: metrics-server + namespace: kube-system + group: metrics.k8s.io + version: v1beta1 + insecureSkipTLSVerify: true + groupPriorityMinimum: 100 + versionPriority: 100 +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: metrics-server + namespace: kube-system +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: metrics-server + namespace: kube-system + labels: + k8s-app: metrics-server +spec: + selector: + matchLabels: + k8s-app: metrics-server + template: + metadata: + name: metrics-server + labels: + k8s-app: metrics-server + spec: + serviceAccountName: metrics-server + volumes: + # mount in tmp so we can safely use from-scratch images and/or read-only containers + - name: tmp-dir + emptyDir: {} + - name: ca-ssl + hostPath: + path: 
/etc/kubernetes/pki + containers: + - name: metrics-server + image: dotbalo/metrics-server:0.3.7 + imagePullPolicy: IfNotPresent + args: + - --cert-dir=/tmp + - --secure-port=4443 + - --metric-resolution=30s + - --kubelet-insecure-tls + - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname + - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm + - --requestheader-username-headers=X-Remote-User + - --requestheader-group-headers=X-Remote-Group + - --requestheader-extra-headers-prefix=X-Remote-Extra- + ports: + - name: main-port + containerPort: 4443 + protocol: TCP + securityContext: + readOnlyRootFilesystem: true + runAsNonRoot: true + runAsUser: 1000 + volumeMounts: + - name: tmp-dir + mountPath: /tmp + - name: ca-ssl + mountPath: /etc/kubernetes/pki + nodeSelector: + kubernetes.io/os: linux +--- +apiVersion: v1 +kind: Service +metadata: + name: metrics-server + namespace: kube-system + labels: + kubernetes.io/name: "Metrics-server" + kubernetes.io/cluster-service: "true" +spec: + selector: + k8s-app: metrics-server + ports: + - port: 443 + protocol: TCP + targetPort: main-port +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: system:metrics-server +rules: +- apiGroups: + - "" + resources: + - pods + - nodes + - nodes/stats + - namespaces + - configmaps + verbs: + - get + - list + - watch +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: system:metrics-server +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: system:metrics-server +subjects: +- kind: ServiceAccount + name: metrics-server + namespace: kube-system diff --git a/metrics-server-3.6.1/aggregated-metrics-reader.yaml b/metrics-server-3.6.1/aggregated-metrics-reader.yaml new file mode 100644 index 0000000..0a0e159 --- /dev/null +++ b/metrics-server-3.6.1/aggregated-metrics-reader.yaml @@ -0,0 +1,13 @@ +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: system:aggregated-metrics-reader + labels: + rbac.authorization.k8s.io/aggregate-to-view: "true" + rbac.authorization.k8s.io/aggregate-to-edit: "true" + rbac.authorization.k8s.io/aggregate-to-admin: "true" +rules: +- apiGroups: ["metrics.k8s.io"] + resources: ["pods", "nodes"] + verbs: ["get", "list", "watch"] diff --git a/metrics-server/auth-delegator.yaml b/metrics-server-3.6.1/auth-delegator.yaml similarity index 85% rename from metrics-server/auth-delegator.yaml rename to metrics-server-3.6.1/auth-delegator.yaml index e3442c5..87909da 100644 --- a/metrics-server/auth-delegator.yaml +++ b/metrics-server-3.6.1/auth-delegator.yaml @@ -1,5 +1,5 @@ --- -apiVersion: rbac.authorization.k8s.io/v1beta1 +apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: metrics-server:system:auth-delegator diff --git a/metrics-server/auth-reader.yaml b/metrics-server-3.6.1/auth-reader.yaml similarity index 86% rename from metrics-server/auth-reader.yaml rename to metrics-server-3.6.1/auth-reader.yaml index f0616e1..062afa8 100644 --- a/metrics-server/auth-reader.yaml +++ b/metrics-server-3.6.1/auth-reader.yaml @@ -1,5 +1,5 @@ --- -apiVersion: rbac.authorization.k8s.io/v1beta1 +apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: metrics-server-auth-reader diff --git a/metrics-server/metrics-apiservice.yaml b/metrics-server-3.6.1/metrics-apiservice.yaml similarity index 100% rename from metrics-server/metrics-apiservice.yaml rename to 
metrics-server-3.6.1/metrics-apiservice.yaml diff --git a/metrics-server-3.6.1/metrics-server-deployment.yaml b/metrics-server-3.6.1/metrics-server-deployment.yaml new file mode 100644 index 0000000..fd1db9b --- /dev/null +++ b/metrics-server-3.6.1/metrics-server-deployment.yaml @@ -0,0 +1,62 @@ +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: metrics-server + namespace: kube-system +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: metrics-server + namespace: kube-system + labels: + k8s-app: metrics-server +spec: + selector: + matchLabels: + k8s-app: metrics-server + template: + metadata: + name: metrics-server + labels: + k8s-app: metrics-server + spec: + serviceAccountName: metrics-server + volumes: + # mount in tmp so we can safely use from-scratch images and/or read-only containers + - name: tmp-dir + emptyDir: {} + - name: ca-ssl + hostPath: + path: /etc/kubernetes/pki + containers: + - name: metrics-server + image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6 + args: + - --cert-dir=/tmp + - --secure-port=4443 + - --metric-resolution=30s + - --kubelet-insecure-tls + - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname + - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm + - --requestheader-username-headers=X-Remote-User + - --requestheader-group-headers=X-Remote-Group + - --requestheader-extra-headers-prefix=X-Remote-Extra- + ports: + - name: main-port + containerPort: 4443 + protocol: TCP + securityContext: + readOnlyRootFilesystem: true + runAsNonRoot: true + runAsUser: 1000 + imagePullPolicy: IfNotPresent + volumeMounts: + - name: tmp-dir + mountPath: /tmp + - mountPath: /etc/kubernetes/pki + name: ca-ssl + nodeSelector: + beta.kubernetes.io/os: linux + kubernetes.io/arch: "amd64" diff --git a/metrics-server/metrics-server-service.yaml b/metrics-server-3.6.1/metrics-server-service.yaml similarity index 77% rename from metrics-server/metrics-server-service.yaml rename to metrics-server-3.6.1/metrics-server-service.yaml index 082b00c..db9a622 100644 --- a/metrics-server/metrics-server-service.yaml +++ b/metrics-server-3.6.1/metrics-server-service.yaml @@ -6,10 +6,11 @@ metadata: namespace: kube-system labels: kubernetes.io/name: "Metrics-server" + kubernetes.io/cluster-service: "true" spec: selector: k8s-app: metrics-server ports: - port: 443 protocol: TCP - targetPort: 443 + targetPort: main-port diff --git a/metrics-server/resource-reader.yaml b/metrics-server-3.6.1/resource-reader.yaml similarity index 84% rename from metrics-server/resource-reader.yaml rename to metrics-server-3.6.1/resource-reader.yaml index 34294a3..52cf808 100644 --- a/metrics-server/resource-reader.yaml +++ b/metrics-server-3.6.1/resource-reader.yaml @@ -11,14 +11,7 @@ rules: - nodes - nodes/stats - namespaces - verbs: - - get - - list - - watch -- apiGroups: - - "extensions" - resources: - - deployments + - configmaps verbs: - get - list diff --git a/metrics-server/metrics-server-deployment.yaml b/metrics-server/metrics-server-deployment.yaml deleted file mode 100644 index 52120ba..0000000 --- a/metrics-server/metrics-server-deployment.yaml +++ /dev/null @@ -1,49 +0,0 @@ ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: metrics-server - namespace: kube-system ---- -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: metrics-server - namespace: kube-system - labels: - k8s-app: metrics-server -spec: - selector: - matchLabels: - k8s-app: 
metrics-server - template: - metadata: - name: metrics-server - labels: - k8s-app: metrics-server - spec: - volumes: - - name: timezone - hostPath: - path: /etc/timezone - type: File - - name: localtime - hostPath: - path: /usr/share/zoneinfo/Asia/Shanghai - type: File - serviceAccountName: metrics-server - containers: - - name: metrics-server - image: gcr.io/google_containers/metrics-server-amd64:v0.2.1 - imagePullPolicy: IfNotPresent - command: - - /metrics-server - # - --source=kubernetes.summary_api:'' - # 10255 readonly端口已经作废 - - --source=kubernetes.summary_api:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true - - --metric-resolution=30s - volumeMounts: - - name: timezone - mountPath: "/etc/timezone" - - name: localtime - mountPath: "/etc/localtime" diff --git a/nginx-lb/docker-compose.yaml b/nginx-lb/docker-compose.yaml deleted file mode 100644 index 72048d7..0000000 --- a/nginx-lb/docker-compose.yaml +++ /dev/null @@ -1,11 +0,0 @@ -version: '2' -services: - etcd: - image: nginx:latest - container_name: nginx-lb - hostname: nginx-lb - volumes: - - ./nginx-lb.conf:/etc/nginx/nginx.conf - ports: - - 16443:16443 - restart: always diff --git a/nginx-lb/nginx-lb.conf.tpl b/nginx-lb/nginx-lb.conf.tpl deleted file mode 100644 index 5367e91..0000000 --- a/nginx-lb/nginx-lb.conf.tpl +++ /dev/null @@ -1,46 +0,0 @@ -user nginx; -worker_processes 1; - -error_log /var/log/nginx/error.log warn; -pid /var/run/nginx.pid; - - -events { - worker_connections 1024; -} - - -http { - include /etc/nginx/mime.types; - default_type application/octet-stream; - - log_format main '$remote_addr - $remote_user [$time_local] "$request" ' - '$status $body_bytes_sent "$http_referer" ' - '"$http_user_agent" "$http_x_forwarded_for"'; - - access_log /var/log/nginx/access.log main; - - sendfile on; - #tcp_nopush on; - - keepalive_timeout 65; - - #gzip on; - - include /etc/nginx/conf.d/*.conf; -} - -stream { - upstream apiserver { - server K8SHA_IP1:6443 weight=5 max_fails=3 fail_timeout=30s; - server K8SHA_IP2:6443 weight=5 max_fails=3 fail_timeout=30s; - server K8SHA_IP3:6443 weight=5 max_fails=3 fail_timeout=30s; - } - - server { - listen 16443; - proxy_connect_timeout 1s; - proxy_timeout 3s; - proxy_pass apiserver; - } -} diff --git a/prometheus/README.md b/prometheus/README.md deleted file mode 100644 index 647523d..0000000 --- a/prometheus/README.md +++ /dev/null @@ -1,4 +0,0 @@ -# 先创建文件夹和设置权限 - -mkdir -p /mnt/mycephfs/k8s-deploy/kube-system/prometheus -chown -R 65534:65534 /mnt/mycephfs/k8s-deploy/kube-system/prometheus diff --git a/prometheus/cluster-role.yaml b/prometheus/cluster-role.yaml deleted file mode 100644 index 153ec51..0000000 --- a/prometheus/cluster-role.yaml +++ /dev/null @@ -1,33 +0,0 @@ -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRole -metadata: - name: prometheus -rules: -- apiGroups: [""] - resources: - - nodes - - nodes/proxy - - services - - endpoints - - pods - verbs: ["get", "list", "watch"] -- apiGroups: - - extensions - resources: - - ingresses - verbs: ["get", "list", "watch"] -- nonResourceURLs: ["/metrics"] - verbs: ["get"] ---- -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: prometheus -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: prometheus -subjects: -- kind: ServiceAccount - name: default - namespace: kube-system diff --git a/prometheus/config-map.yaml b/prometheus/config-map.yaml deleted file mode 100644 index 770910d..0000000 --- a/prometheus/config-map.yaml +++ 
/dev/null @@ -1,257 +0,0 @@ -apiVersion: v1 -kind: ConfigMap -metadata: - name: prometheus-server-conf - labels: - name: prometheus-server-conf - namespace: kube-system -data: - prometheus.yml: |- - # A scrape configuration for running Prometheus on a Kubernetes cluster. - # This uses separate scrape configs for cluster components (i.e. API server, node) - # and services to allow each to use different authentication configs. - # - # Kubernetes labels will be added as Prometheus labels on metrics via the - # `labelmap` relabeling action. - # - # If you are using Kubernetes 1.7.2 or earlier, please take note of the comments - # for the kubernetes-cadvisor job; you will need to edit or remove this job. - - # Scrape config for API servers. - # - # Kubernetes exposes API servers as endpoints to the default/kubernetes - # service so this uses `endpoints` role and uses relabelling to only keep - # the endpoints associated with the default/kubernetes service using the - # default named port `https`. This works for single API server deployments as - # well as HA API server deployments. - scrape_configs: - - job_name: 'kubernetes-apiservers' - - kubernetes_sd_configs: - - role: endpoints - - # Default to scraping over https. If required, just disable this or change to - # `http`. - scheme: https - - # This TLS & bearer token file config is used to connect to the actual scrape - # endpoints for cluster components. This is separate to discovery auth - # configuration because discovery & scraping are two separate concerns in - # Prometheus. The discovery auth config is automatic if Prometheus runs inside - # the cluster. Otherwise, more config options have to be provided within the - # . - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - # If your node certificates are self-signed or use a different CA to the - # master CA, then disable certificate verification below. Note that - # certificate verification is an integral part of a secure infrastructure - # so this should only be disabled in a controlled environment. You can - # disable certificate verification by uncommenting the line below. - # - # insecure_skip_verify: true - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - - # Keep only the default/kubernetes service endpoints for the https port. This - # will add targets for each API server which Kubernetes adds an endpoint to - # the default/kubernetes service. - relabel_configs: - - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] - action: keep - regex: default;kubernetes;https - - # Scrape config for nodes (kubelet). - # - # Rather than connecting directly to the node, the scrape is proxied though the - # Kubernetes apiserver. This means it will work if Prometheus is running out of - # cluster, or can't connect to nodes for some other reason (e.g. because of - # firewalling). - - job_name: 'kubernetes-nodes' - - # Default to scraping over https. If required, just disable this or change to - # `http`. - scheme: https - - # This TLS & bearer token file config is used to connect to the actual scrape - # endpoints for cluster components. This is separate to discovery auth - # configuration because discovery & scraping are two separate concerns in - # Prometheus. The discovery auth config is automatic if Prometheus runs inside - # the cluster. Otherwise, more config options have to be provided within the - # . 
- tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - - kubernetes_sd_configs: - - role: node - - relabel_configs: - - action: labelmap - regex: __meta_kubernetes_node_label_(.+) - - target_label: __address__ - replacement: kubernetes.default.svc:443 - - source_labels: [__meta_kubernetes_node_name] - regex: (.+) - target_label: __metrics_path__ - replacement: /api/v1/nodes/${1}/proxy/metrics - - # Scrape config for Kubelet cAdvisor. - # - # This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics - # (those whose names begin with 'container_') have been removed from the - # Kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to - # retrieve those metrics. - # - # In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor - # HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics" - # in that case (and ensure cAdvisor's HTTP server hasn't been disabled with - # the --cadvisor-port=0 Kubelet flag). - # - # This job is not necessary and should be removed in Kubernetes 1.6 and - # earlier versions, or it will cause the metrics to be scraped twice. - - job_name: 'kubernetes-cadvisor' - - # Default to scraping over https. If required, just disable this or change to - # `http`. - scheme: https - - # This TLS & bearer token file config is used to connect to the actual scrape - # endpoints for cluster components. This is separate to discovery auth - # configuration because discovery & scraping are two separate concerns in - # Prometheus. The discovery auth config is automatic if Prometheus runs inside - # the cluster. Otherwise, more config options have to be provided within the - # . - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - - kubernetes_sd_configs: - - role: node - - relabel_configs: - - action: labelmap - regex: __meta_kubernetes_node_label_(.+) - - target_label: __address__ - replacement: kubernetes.default.svc:443 - - source_labels: [__meta_kubernetes_node_name] - regex: (.+) - target_label: __metrics_path__ - replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor - - # Example scrape config for service endpoints. - # - # The relabeling allows the actual service scrape endpoint to be configured - # for all or only some endpoints. - - job_name: 'kubernetes-service-endpoints' - - kubernetes_sd_configs: - - role: endpoints - - relabel_configs: - # Example relabel to scrape only endpoints that have - # "example.io/should_be_scraped = true" annotation. - # - source_labels: [__meta_kubernetes_service_annotation_example_io_should_be_scraped] - # action: keep - # regex: true - # - # Example relabel to customize metric path based on endpoints - # "example.io/metric_path = " annotation. - # - source_labels: [__meta_kubernetes_service_annotation_example_io_metric_path] - # action: replace - # target_label: __metrics_path__ - # regex: (.+) - # - # Example relabel to scrape only single, desired port for the service based - # on endpoints "example.io/scrape_port = " annotation. - # - source_labels: [__address__, __meta_kubernetes_service_annotation_example_io_scrape_port] - # action: replace - # regex: ([^:]+)(?::\d+)?;(\d+) - # replacement: $1:$2 - # target_label: __address__ - # - # Example relabel to configure scrape scheme for all service scrape targets - # based on endpoints "example.io/scrape_scheme = " annotation. 
- # - source_labels: [__meta_kubernetes_service_annotation_example_io_scrape_scheme] - # action: replace - # target_label: __scheme__ - # regex: (https?) - - action: labelmap - regex: __meta_kubernetes_service_label_(.+) - - source_labels: [__meta_kubernetes_namespace] - action: replace - target_label: kubernetes_namespace - - source_labels: [__meta_kubernetes_service_name] - action: replace - target_label: kubernetes_name - - # Example scrape config for pods - # - # The relabeling allows the actual pod scrape to be configured - # for all the declared ports (or port-free target if none is declared) - # or only some ports. - - job_name: 'kubernetes-pods' - - kubernetes_sd_configs: - - role: pod - - relabel_configs: - # Example relabel to scrape only pods that have - # "example.io/should_be_scraped = true" annotation. - # - source_labels: [__meta_kubernetes_pod_annotation_example_io_should_be_scraped] - # action: keep - # regex: true - # - # Example relabel to customize metric path based on pod - # "example.io/metric_path = " annotation. - # - source_labels: [__meta_kubernetes_pod_annotation_example_io_metric_path] - # action: replace - # target_label: __metrics_path__ - # regex: (.+) - # - # Example relabel to scrape only single, desired port for the pod - # based on pod "example.io/scrape_port = " annotation. - # Note that __address__ is modified here, so if pod containers' ports - # are declared, they all will be ignored. - # - source_labels: [__address__, __meta_kubernetes_pod_annotation_example_io_scrape_port] - # action: replace - # regex: ([^:]+)(?::\d+)?;(\d+) - # replacement: $1:$2 - # target_label: __address__ - - action: labelmap - regex: __meta_kubernetes_pod_label_(.+) - - source_labels: [__meta_kubernetes_namespace] - action: replace - target_label: kubernetes_namespace - - source_labels: [__meta_kubernetes_pod_name] - action: replace - target_label: kubernetes_pod_name - - job_name: kubernetes-nodes-cadvisor - scrape_interval: 10s - scrape_timeout: 10s - scheme: https # remove if you want to scrape metrics on insecure port - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - kubernetes_sd_configs: - - role: node - relabel_configs: - - action: labelmap - regex: __meta_kubernetes_node_label_(.+) - # Only for Kubernetes ^1.7.3. 
- # See: https://github.com/prometheus/prometheus/issues/2916 - - target_label: __address__ - replacement: kubernetes.default.svc:443 - - source_labels: [__meta_kubernetes_node_name] - regex: (.+) - target_label: __metrics_path__ - replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor - metric_relabel_configs: - - action: replace - source_labels: [id] - regex: '^/machine\.slice/machine-rkt\\x2d([^\\]+)\\.+/([^/]+)\.service$' - target_label: rkt_container_name - replacement: '${2}-${1}' - - action: replace - source_labels: [id] - regex: '^/system\.slice/(.+)\.service$' - target_label: systemd_service_name - replacement: '${1}' diff --git a/prometheus/prometheus-deployment.yaml b/prometheus/prometheus-deployment.yaml deleted file mode 100644 index 1443704..0000000 --- a/prometheus/prometheus-deployment.yaml +++ /dev/null @@ -1,34 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: prometheus - namespace: kube-system -spec: - replicas: 1 - template: - metadata: - labels: - app: prometheus - spec: - nodeSelector: - node-role.kubernetes.io/master: "" - containers: - - name: prometheus - image: prom/prometheus:v2.3.1 - args: - - "--config.file=/etc/prometheus/prometheus.yml" - - "--storage.tsdb.path=/prometheus/" - ports: - - containerPort: 9090 - volumeMounts: - - name: prometheus-config-volume - mountPath: /etc/prometheus/ - - name: prometheus-storage - mountPath: /prometheus/ - volumes: - - name: prometheus-config-volume - configMap: - defaultMode: 420 - name: prometheus-server-conf - - name: prometheus-storage - emptyDir: {} diff --git a/prometheus/prometheus-service.yaml b/prometheus/prometheus-service.yaml deleted file mode 100644 index 911bc16..0000000 --- a/prometheus/prometheus-service.yaml +++ /dev/null @@ -1,13 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: prometheus - namespace: kube-system -spec: - selector: - app: prometheus - type: NodePort - ports: - - port: 8080 - targetPort: 9090 - nodePort: 30013 diff --git a/tls.crt b/tls.crt deleted file mode 100644 index cca68ed..0000000 --- a/tls.crt +++ /dev/null @@ -1,19 +0,0 @@ ------BEGIN CERTIFICATE----- -MIIDAzCCAeugAwIBAgIJAIoRNa+HktZfMA0GCSqGSIb3DQEBCwUAMBgxFjAUBgNV -BAMMDWs4cy1tYXN0ZXItbGIwHhcNMTgxMTMwMDc1ODE3WhcNMjgxMTI3MDc1ODE3 -WjAYMRYwFAYDVQQDDA1rOHMtbWFzdGVyLWxiMIIBIjANBgkqhkiG9w0BAQEFAAOC -AQ8AMIIBCgKCAQEAwLim3+JONacZ5wTx6uv1lKysoAzYfnGQ3yfPljDygCbFhuzr -Vfbtw4E66otdsvJgn9vmPbdSDecrvFnKAUJpSX9D+AbAXDnskT+HEhhgfGrZtcY5 -5iMlSfXX3+pS71rEOSpenDXyDE5TzTBAF+8W6hGsBPHPDkGxIgd1VVbeFc0/HMkN -vETnuIoK077RHtrE+XZ2yAl0IChGYBsTE6vZ2QjigBhJMw810rgD/ZmA2zRnoiCw -ERxsmxOyRCnm1HFQW5CgiyX+dGuep1+3vsdve3TPJ7KUBtGKnhkEPDfXUWZ+AR+l -WjVJGFxIAZEpvHMH+j2sEMuGN3Q0mdJ5FlNabwIDAQABo1AwTjAdBgNVHQ4EFgQU -DTI8VutqShoJlqFUNstODILiGuowHwYDVR0jBBgwFoAUDTI8VutqShoJlqFUNstO -DILiGuowDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAB5oDA2O/MvqQ -h8kyhllZ7hFsgGkUOgcHnftL4HdYEOy7NUoqAoUaj41cNstYNin6hBp1AYA9vz7k -l/Kjj8m3GQeQFJKrw8W4hrsjeo5GPZSYPo5f/Aw+BTXL8TYdWQn86iiSqAT40pd/ -staBqzjAYih0/dgZg2xJ4o8h5rnkX/KR2vrDkLm6JwRnWHrP7vSjgnSnVXRcumSE -qkWn7eTNFwTLJTdC1ilaG7EfP+hYKj7y3MRn/xM+ugHU6rfOMXkf7t1tLt/bRz8E -FS6hV00Re/4cuIn5HnXPpSLgU7GJwUAjfIShh1DanTMRaNDBYCKse8M2g6FdQTfZ -bmzaffhI1g== ------END CERTIFICATE----- diff --git a/tls.key b/tls.key deleted file mode 100644 index 6d29c82..0000000 --- a/tls.key +++ /dev/null @@ -1,28 +0,0 @@ ------BEGIN PRIVATE KEY----- -MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDAuKbf4k41pxnn -BPHq6/WUrKygDNh+cZDfJ8+WMPKAJsWG7OtV9u3DgTrqi12y8mCf2+Y9t1IN5yu8 
-WcoBQmlJf0P4BsBcOeyRP4cSGGB8atm1xjnmIyVJ9dff6lLvWsQ5Kl6cNfIMTlPN -MEAX7xbqEawE8c8OQbEiB3VVVt4VzT8cyQ28ROe4igrTvtEe2sT5dnbICXQgKEZg -GxMTq9nZCOKAGEkzDzXSuAP9mYDbNGeiILARHGybE7JEKebUcVBbkKCLJf50a56n -X7e+x297dM8nspQG0YqeGQQ8N9dRZn4BH6VaNUkYXEgBkSm8cwf6PawQy4Y3dDSZ -0nkWU1pvAgMBAAECggEBALrGelwCfJ+86fqeLULrGd/UFZ0rtemdcLUFZUb++xa9 -/LOOC2oN3VKbfRjwpoeWJZToTlTDxQ9aWmW5c3ATB+1GHP5UtLrtHFuMgQBFhcUu -3P4xNc3Xg/0Q+P22oFf+1Ks+Z+Dm20WX59m1iHhprAB/zgIgw/XiLqR3K/zgKm5f -BFkkMCjptYxaR6r456C7yRi2nIHcMCcrYNWWYergZGYU9T7DMu2ATzCsnvsfUUS6 -V/EpHT/1RyPd8uQCsqWXec6+fLYAKx5FXLKvTqqsuSi2A83A2kY3HpcbSwXHLCM6 -B6LLMJPPuMhTRg02imttGkFkFjQiUWnlvuCv7PJ10NECgYEA6Wsw4im9bBhhC6Ac -wuzpczbjM4gjUihW+ntgks0suUMcn4VvxCQ9mAiTKCMO6u7VtG+Pe4NCaXtjQtAd -wRWzyuZYrZUHBXfJIvPtJYKuhOpyQjzDXPYZgaht+8S5EPV1JS35ozqHPwfA4fHT -Yz98xr1iVyoNyiBJVhxyjy3WWdcCgYEA012OL3IA1jIbWXDGKaUBRNgckwpWmdbI -/U9iR0tmiCP3RIpCv/D5yC/eG6d5ILX8w1yskB6lkw3OKV7Q8rzE1cw+r7GWCt5d -BmRMuyclHg4NiBgWAkwSMvT9zs43Ml45LxTHTRV0AVGCuegs040fT61bHh4GqUNR -QLyN4i8f4SkCgYB+MTRJYTWGRhvZNCO4gmqnnknw5y3pUePMIX2RgBkow46q83H8 -QXeHRUOBlIqRGrQwi4uvw8PY0RtV2LvtUnVUQXo5xfL40szL98IC4IbHVxSUmNMp -4+bgQRXM4osHDxzZD+UBiTfrLJ7ryFh3NLCZpXOQGi1AVHoxcsnAfJCBGwKBgQCa -VDk5U1hhDX0CtWE7jwt6JQHYKzhIY5elvYzY2aknxnsJRJqwY1c+YBUgxAuhYsAI -NWaaZIYo9W+OrXiLhGGEaflrd5NCpFHwFNQh4tcrNr+Sm2OWkczIADJCCjgrQrkm -M1nCYuOtAsMc0vXIEcbG+qEJQItEk66EQiim+hmg4QKBgDIqy62XRty68/+kLvFp -t0RzTu9NuISzKafRYoOWp4ZugiDDHimmqZDoFqZubfB0aIioYFk1YTwSpDK4wwT1 -LewD42EnaQAXga2oIY5P+9kPo2ph5cKBpE+nHWzAnY6PayMPZzU9rOHutiUzpS/Q -N33P3eJY288DN+G/M9mFOgjQ ------END PRIVATE KEY----- diff --git a/traefik/README.md b/traefik/README.md deleted file mode 100644 index 25fb230..0000000 --- a/traefik/README.md +++ /dev/null @@ -1,1102 +0,0 @@ -# kubeadm-highavailiability - 基于kubeadm的kubernetes高可用集群部署,支持v1.11.x v1.9.x v1.7.x v1.6.x版本 - -![k8s logo](images/Kubernetes.png) - -- [中文文档(for v1.11.x版本)](README_CN.md) -- [English document(for v1.11.x version)](README.md) -- [中文文档(for v1.9.x版本)](v1.9/README_CN.md) -- [English document(for v1.9.x version)](v1.9/README.md) -- [中文文档(for v1.7.x版本)](v1.7/README_CN.md) -- [English document(for v1.7.x version)](v1.7/README.md) -- [中文文档(for v1.6.x版本)](v1.6/README_CN.md) -- [English document(for v1.6.x version)](v1.6/README.md) - ---- - -- [GitHub项目地址](https://github.com/cookeem/kubeadm-ha/) -- [OSChina项目地址](https://git.oschina.net/cookeem/kubeadm-ha/) - ---- - -- 该指引适用于v1.11.x版本的kubernetes集群 - -> v1.11.x版本支持在control plane上启动TLS的etcd高可用集群。 - -### 目录 - -1. [部署架构](#部署架构) - 1. [概要部署架构](#概要部署架构) - 1. [详细部署架构](#详细部署架构) - 1. [主机节点清单](#主机节点清单) -1. [安装前准备](#安装前准备) - 1. [版本信息](#版本信息) - 1. [所需docker镜像](#所需docker镜像) - 1. [系统设置](#系统设置) -1. [kubernetes安装](#kubernetes安装) - 1. [firewalld和iptables相关端口设置](#firewalld和iptables相关端口设置) - 1. [kubernetes相关服务安装](#kubernetes相关服务安装) - 1. [master节点互信设置](#master节点互信设置) -1. [master高可用安装](#master高可用安装) - 1. [配置文件初始化](#配置文件初始化) - 1. [kubeadm初始化](#kubeadm初始化) - 1. [高可用配置](#高可用配置) -1. [master负载均衡设置](#master负载均衡设置) - 1. [keepalived安装配置](#keepalived安装配置) - 1. [nginx负载均衡配置](#nginx负载均衡配置) - 1. [kube-proxy高可用设置](#kube-proxy高可用设置) - 1. [验证高可用状态](#验证高可用状态) - 1. [基础组件安装](#基础组件安装) -1. [worker节点设置](#worker节点设置) - 1. [worker加入高可用集群](#worker加入高可用集群) -1. [集群验证](#集群验证) - 1. 
[验证集群高可用设置](#验证集群高可用设置) - -### 部署架构 - -#### 概要部署架构 - -![ha logo](images/ha.png) - -- kubernetes高可用的核心架构是master的高可用,kubectl、客户端以及nodes访问load balancer实现高可用。 - ---- -[返回目录](#目录) - -#### 详细部署架构 - -![k8s ha](images/k8s-ha.png) - -- kubernetes组件说明 - -> kube-apiserver:集群核心,集群API接口、集群各个组件通信的中枢;集群安全控制; -> etcd:集群的数据中心,用于存放集群的配置以及状态信息,非常重要,如果数据丢失那么集群将无法恢复;因此高可用集群部署首先就是etcd是高可用集群; -> kube-scheduler:集群Pod的调度中心;默认kubeadm安装情况下--leader-elect参数已经设置为true,保证master集群中只有一个kube-scheduler处于活跃状态; -> kube-controller-manager:集群状态管理器,当集群状态与期望不同时,kcm会努力让集群恢复期望状态,比如:当一个pod死掉,kcm会努力新建一个pod来恢复对应replicas set期望的状态;默认kubeadm安装情况下--leader-elect参数已经设置为true,保证master集群中只有一个kube-controller-manager处于活跃状态; -> kubelet: kubernetes node agent,负责与node上的docker engine打交道; -> kube-proxy: 每个node上一个,负责service vip到endpoint pod的流量转发,当前主要通过设置iptables规则实现。 - -- 负载均衡 - -> keepalived集群设置一个虚拟ip地址,虚拟ip地址指向k8s-master01、k8s-master02、k8s-master03。 -> nginx用于k8s-master01、k8s-master02、k8s-master03的apiserver的负载均衡。外部kubectl以及nodes访问apiserver的时候就可以用过keepalived的虚拟ip(192.168.20.10)以及nginx端口(16443)访问master集群的apiserver。 - ---- - -[返回目录](#目录) - -#### 主机节点清单 - -主机名 | IP地址 | 说明 | 组件 -:--- | :--- | :--- | :--- -k8s-master01 ~ 03 | 192.168.20.20 ~ 22 | master节点 * 3 | keepalived、nginx、etcd、kubelet、kube-apiserver -k8s-master-lb | 192.168.20.10 | keepalived虚拟IP | 无 -k8s-node01 ~ 08 | 192.168.20.30 ~ 37 | worker节点 * 8 | kubelet - ---- - -[返回目录](#目录) - -### 安装前准备 - -#### 版本信息 - -- Linux版本:CentOS 7.4.1708 - -- 内核版本: 4.6.4-1.el7.elrepo.x86_64 - -```sh -$ cat /etc/redhat-release -CentOS Linux release 7.4.1708 (Core) - -$ uname -r -4.6.4-1.el7.elrepo.x86_64 -``` - -- docker版本:17.12.0-ce-rc2 - -```sh -$ docker version -Client: - Version: 17.12.0-ce-rc2 - API version: 1.35 - Go version: go1.9.2 - Git commit: f9cde63 - Built: Tue Dec 12 06:42:20 2017 - OS/Arch: linux/amd64 - -Server: - Engine: - Version: 17.12.0-ce-rc2 - API version: 1.35 (minimum version 1.12) - Go version: go1.9.2 - Git commit: f9cde63 - Built: Tue Dec 12 06:44:50 2017 - OS/Arch: linux/amd64 - Experimental: false -``` - -- kubeadm版本:v1.11.1 - -```sh -$ kubeadm version -kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"} -``` - -- kubelet版本:v1.11.1 - -```sh -$ kubelet --version -Kubernetes v1.11.1 -``` - -- 网络组件 - -> calico - ---- - -[返回目录](#目录) - -#### 所需docker镜像 - -- 相关docker镜像以及版本 - -```sh -# kuberentes basic components - -# 通过kubeadm 获取基础组件镜像清单 -$ kubeadm config images list --kubernetes-version=v1.11.1 -k8s.gcr.io/kube-apiserver-amd64:v1.11.1 -k8s.gcr.io/kube-controller-manager-amd64:v1.11.1 -k8s.gcr.io/kube-scheduler-amd64:v1.11.1 -k8s.gcr.io/kube-proxy-amd64:v1.11.1 -k8s.gcr.io/pause:3.1 -k8s.gcr.io/etcd-amd64:3.2.18 -k8s.gcr.io/coredns:1.1.3 - -# 通过kubeadm 拉取基础镜像 -$ kubeadm config images pull --kubernetes-version=v1.11.1 - -# kubernetes networks add ons -$ docker pull quay.io/calico/typha:v0.7.4 -$ docker pull quay.io/calico/node:v3.1.3 -$ docker pull quay.io/calico/cni:v3.1.3 - -# kubernetes metrics server -$ docker pull gcr.io/google_containers/metrics-server-amd64:v0.2.1 - -# kubernetes dashboard -$ docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.3 - -# kubernetes heapster -$ docker pull k8s.gcr.io/heapster-amd64:v1.5.4 -$ docker pull k8s.gcr.io/heapster-influxdb-amd64:v1.5.2 -$ docker pull k8s.gcr.io/heapster-grafana-amd64:v5.0.4 - -# kubernetes apiserver load balancer -$ docker pull 
nginx:latest - -# prometheus -$ docker pull prom/prometheus:v2.3.1 - -# traefik -$ docker pull traefik:v1.6.3 - -# istio -$ docker pull docker.io/jaegertracing/all-in-one:1.5 -$ docker pull docker.io/prom/prometheus:v2.3.1 -$ docker pull docker.io/prom/statsd-exporter:v0.6.0 -$ docker pull gcr.io/istio-release/citadel:1.0.0 -$ docker pull gcr.io/istio-release/galley:1.0.0 -$ docker pull gcr.io/istio-release/grafana:1.0.0 -$ docker pull gcr.io/istio-release/mixer:1.0.0 -$ docker pull gcr.io/istio-release/pilot:1.0.0 -$ docker pull gcr.io/istio-release/proxy_init:1.0.0 -$ docker pull gcr.io/istio-release/proxyv2:1.0.0 -$ docker pull gcr.io/istio-release/servicegraph:1.0.0 -$ docker pull gcr.io/istio-release/sidecar_injector:1.0.0 -$ docker pull quay.io/coreos/hyperkube:v1.7.6_coreos.0 -``` - ---- - -[返回目录](#目录) - -#### 系统设置 - -- 在所有kubernetes节点上增加kubernetes仓库 - -```sh -$ cat < /etc/yum.repos.d/kubernetes.repo -[kubernetes] -name=Kubernetes -baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 -enabled=1 -gpgcheck=1 -repo_gpgcheck=1 -gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg -exclude=kube* -EOF -``` - -- 在所有kubernetes节点上进行系统更新 - -```sh -$ yum update -y -``` - -- 在所有kubernetes节点上设置SELINUX为permissive模式 - -```sh -$ vi /etc/selinux/config -SELINUX=permissive - -$ setenforce 0 -``` - -- 在所有kubernetes节点上设置iptables参数 - -```sh -$ cat < /etc/sysctl.d/k8s.conf -net.bridge.bridge-nf-call-ip6tables = 1 -net.bridge.bridge-nf-call-iptables = 1 -net.ipv4.ip_forward = 1 -EOF - -$ sysctl --system -``` - -- 在所有kubernetes节点上禁用swap - -```sh -$ swapoff -a - -# 禁用fstab中的swap项目 -$ vi /etc/fstab -#/dev/mapper/centos-swap swap swap defaults 0 0 - -# 确认swap已经被禁用 -$ cat /proc/swaps -Filename Type Size Used Priority -``` - -- 在所有kubernetes节点上重启主机 - -```sh -# 重启主机 -$ reboot -``` - ---- - -[返回目录](#目录) - -### kubernetes安装 - -#### firewalld和iptables相关端口设置 - -- 所有节点开启防火墙 - -```sh -# 重启防火墙 -$ systemctl enable firewalld -$ systemctl restart firewalld -$ systemctl status firewalld -``` - -- 相关端口(master) - -协议 | 方向 | 端口 | 说明 -:--- | :--- | :--- | :--- -TCP | Inbound | 16443* | Load balancer Kubernetes API server port -TCP | Inbound | 6443* | Kubernetes API server -TCP | Inbound | 4001 | etcd listen client port -TCP | Inbound | 2379-2380 | etcd server client API -TCP | Inbound | 10250 | Kubelet API -TCP | Inbound | 10251 | kube-scheduler -TCP | Inbound | 10252 | kube-controller-manager -TCP | Inbound | 10255 | Read-only Kubelet API (Deprecated) -TCP | Inbound | 30000-32767 | NodePort Services - -- 设置防火墙策略 - -```sh -$ firewall-cmd --zone=public --add-port=16443/tcp --permanent -$ firewall-cmd --zone=public --add-port=6443/tcp --permanent -$ firewall-cmd --zone=public --add-port=4001/tcp --permanent -$ firewall-cmd --zone=public --add-port=2379-2380/tcp --permanent -$ firewall-cmd --zone=public --add-port=10250/tcp --permanent -$ firewall-cmd --zone=public --add-port=10251/tcp --permanent -$ firewall-cmd --zone=public --add-port=10252/tcp --permanent -$ firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent - -$ firewall-cmd --reload - -$ firewall-cmd --list-all --zone=public -public (active) - target: default - icmp-block-inversion: no - interfaces: ens2f1 ens1f0 nm-bond - sources: - services: ssh dhcpv6-client - ports: 4001/tcp 6443/tcp 2379-2380/tcp 10250/tcp 10251/tcp 10252/tcp 30000-32767/tcp - protocols: - masquerade: no - forward-ports: - source-ports: - icmp-blocks: - rich rules: -``` - -- 相关端口(worker) - -协议 | 方向 
| 端口 | 说明 -:--- | :--- | :--- | :--- -TCP | Inbound | 10250 | Kubelet API -TCP | Inbound | 30000-32767 | NodePort Services - -- 设置防火墙策略 - -```sh -$ firewall-cmd --zone=public --add-port=10250/tcp --permanent -$ firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent - -$ firewall-cmd --reload - -$ firewall-cmd --list-all --zone=public -public (active) - target: default - icmp-block-inversion: no - interfaces: ens2f1 ens1f0 nm-bond - sources: - services: ssh dhcpv6-client - ports: 10250/tcp 30000-32767/tcp - protocols: - masquerade: no - forward-ports: - source-ports: - icmp-blocks: - rich rules: -``` - -- 在所有kubernetes节点上允许kube-proxy的forward - -```sh -$ firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 1 -i docker0 -j ACCEPT -m comment --comment "kube-proxy redirects" -$ firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 1 -o docker0 -j ACCEPT -m comment --comment "docker subnet" -$ firewall-cmd --reload - -$ firewall-cmd --direct --get-all-rules -ipv4 filter INPUT 1 -i docker0 -j ACCEPT -m comment --comment 'kube-proxy redirects' -ipv4 filter FORWARD 1 -o docker0 -j ACCEPT -m comment --comment 'docker subnet' - -# 重启防火墙 -$ systemctl restart firewalld -``` - -- 解决kube-proxy无法启用nodePort,重启firewalld必须执行以下命令,在所有节点设置定时任务 - -```sh -$ crontab -e -0,5,10,15,20,25,30,35,40,45,50,55 * * * * /usr/sbin/iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited -``` - ---- - -[返回目录](#目录) - -#### kubernetes相关服务安装 - -- 在所有kubernetes节点上安装并启动kubernetes - -```sh -$ yum install -y docker-ce-17.12.0.ce-0.2.rc2.el7.centos.x86_64 -$ yum install -y docker-compose-1.9.0-5.el7.noarch -$ systemctl enable docker && systemctl start docker - -$ yum install -y kubelet-1.11.1-0.x86_64 kubeadm-1.11.1-0.x86_64 kubectl-1.11.1-0.x86_64 -$ systemctl enable kubelet && systemctl start kubelet -``` - -- 在所有master节点安装并启动keepalived - -```sh -$ yum install -y keepalived -$ systemctl enable keepalived && systemctl restart keepalived -``` - -#### master节点互信设置 - -- 在k8s-master01节点上设置节点互信 - -```sh -$ rm -rf /root/.ssh/* -$ ssh k8s-master01 pwd -$ ssh k8s-master02 rm -rf /root/.ssh/* -$ ssh k8s-master03 rm -rf /root/.ssh/* -$ ssh k8s-master02 mkdir -p /root/.ssh/ -$ ssh k8s-master03 mkdir -p /root/.ssh/ - -$ scp /root/.ssh/known_hosts root@k8s-master02:/root/.ssh/ -$ scp /root/.ssh/known_hosts root@k8s-master03:/root/.ssh/ - -$ ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa -$ cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys -$ scp /root/.ssh/authorized_keys root@k8s-master02:/root/.ssh/ -``` - -- 在k8s-master02节点上设置节点互信 - -```sh -$ ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa -$ cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys -$ scp /root/.ssh/authorized_keys root@k8s-master03:/root/.ssh/ -``` - -- 在k8s-master03节点上设置节点互信 - -```sh -$ ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa -$ cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys -$ scp /root/.ssh/authorized_keys root@k8s-master01:/root/.ssh/ -$ scp /root/.ssh/authorized_keys root@k8s-master02:/root/.ssh/ -``` - ---- - -[返回目录](#目录) - -### master高可用安装 - -#### 配置文件初始化 - -- 在k8s-master01上克隆kubeadm-ha项目源码 - -```sh -$ git clone https://github.com/cookeem/kubeadm-ha -``` - -- 在k8s-master01上通过`create-config.sh`脚本创建相关配置文件 - -```sh -$ cd kubeadm-ha - -# 根据create-config.sh的提示,修改以下配置信息 -$ vi create-config.sh -# master keepalived virtual ip address -export K8SHA_VIP=192.168.60.79 -# master01 ip address -export K8SHA_IP1=192.168.60.72 -# master02 ip address -export K8SHA_IP2=192.168.60.77 -# master03 ip address -export 
K8SHA_IP3=192.168.60.78 -# master keepalived virtual ip hostname -export K8SHA_VHOST=k8s-master-lb -# master01 hostname -export K8SHA_HOST1=k8s-master01 -# master02 hostname -export K8SHA_HOST2=k8s-master02 -# master03 hostname -export K8SHA_HOST3=k8s-master03 -# master01 network interface name -export K8SHA_NETINF1=nm-bond -# master02 network interface name -export K8SHA_NETINF2=nm-bond -# master03 network interface name -export K8SHA_NETINF3=nm-bond -# keepalived auth_pass config -export K8SHA_KEEPALIVED_AUTH=412f7dc3bfed32194d1600c483e10ad1d -# calico reachable ip address -export K8SHA_CALICO_REACHABLE_IP=192.168.60.1 -# kubernetes CIDR pod subnet, if CIDR pod subnet is "172.168.0.0/16" please set to "172.168.0.0" -export K8SHA_CIDR=172.168.0.0 - -# 以下脚本会创建3个master节点的kubeadm配置文件,keepalived配置文件,nginx负载均衡配置文件,以及calico配置文件 -$ ./create-config.sh -create kubeadm-config.yaml files success. config/k8s-master01/kubeadm-config.yaml -create kubeadm-config.yaml files success. config/k8s-master02/kubeadm-config.yaml -create kubeadm-config.yaml files success. config/k8s-master03/kubeadm-config.yaml -create keepalived files success. config/k8s-master01/keepalived/ -create keepalived files success. config/k8s-master02/keepalived/ -create keepalived files success. config/k8s-master03/keepalived/ -create nginx-lb files success. config/k8s-master01/nginx-lb/ -create nginx-lb files success. config/k8s-master02/nginx-lb/ -create nginx-lb files success. config/k8s-master03/nginx-lb/ -create calico.yaml file success. calico/calico.yaml - -# 设置相关hostname变量 -$ export HOST1=k8s-master01 -$ export HOST2=k8s-master02 -$ export HOST3=k8s-master03 - -# 把kubeadm配置文件放到各个master节点的/root/目录 -$ scp -r config/$HOST1/kubeadm-config.yaml $HOST1:/root/ -$ scp -r config/$HOST2/kubeadm-config.yaml $HOST2:/root/ -$ scp -r config/$HOST3/kubeadm-config.yaml $HOST3:/root/ - -# 把keepalived配置文件放到各个master节点的/etc/keepalived/目录 -$ scp -r config/$HOST1/keepalived/* $HOST1:/etc/keepalived/ -$ scp -r config/$HOST2/keepalived/* $HOST2:/etc/keepalived/ -$ scp -r config/$HOST3/keepalived/* $HOST3:/etc/keepalived/ - -# 把nginx负载均衡配置文件放到各个master节点的/root/目录 -$ scp -r config/$HOST1/nginx-lb $HOST1:/root/ -$ scp -r config/$HOST2/nginx-lb $HOST2:/root/ -$ scp -r config/$HOST3/nginx-lb $HOST3:/root/ -``` - ---- - -[返回目录](#目录) - -#### kubeadm初始化 - -- 在k8s-master01节点上使用kubeadm进行kubernetes集群初始化 - -```sh -# 执行kubeadm init之后务必记录执行结果输出的${YOUR_TOKEN}以及${YOUR_DISCOVERY_TOKEN_CA_CERT_HASH} -$ kubeadm init --config /root/kubeadm-config.yaml -kubeadm join 192.168.20.20:6443 --token ${YOUR_TOKEN} --discovery-token-ca-cert-hash sha256:${YOUR_DISCOVERY_TOKEN_CA_CERT_HASH} -``` - -- 在所有master节点上设置kubectl的配置文件变量 - -```sh -$ cat <> ~/.bashrc -export KUBECONFIG=/etc/kubernetes/admin.conf -EOF - -$ source ~/.bashrc - -# 验证是否可以使用kubectl客户端连接集群 -$ kubectl get nodes -``` - -- 在k8s-master01节点上等待 etcd / kube-apiserver / kube-controller-manager / kube-scheduler 启动 - -```sh -$ kubectl get pods -n kube-system -o wide -NAME READY STATUS RESTARTS AGE IP NODE -... -etcd-k8s-master01 1/1 Running 0 18m 192.168.20.20 k8s-master01 -kube-apiserver-k8s-master01 1/1 Running 0 18m 192.168.20.20 k8s-master01 -kube-controller-manager-k8s-master01 1/1 Running 0 18m 192.168.20.20 k8s-master01 -kube-scheduler-k8s-master01 1/1 Running 1 18m 192.168.20.20 k8s-master01 -... 
-``` - ---- - -[返回目录](#目录) - -#### 高可用配置 - -- 在k8s-master01上把证书复制到其他master - -```sh -# 根据实际情况修改以下HOSTNAMES变量 -$ export CONTROL_PLANE_IPS="k8s-master02 k8s-master03" - -# 把证书复制到其他master节点 -$ for host in ${CONTROL_PLANE_IPS}; do - scp /etc/kubernetes/pki/ca.crt $host:/etc/kubernetes/pki/ca.crt - scp /etc/kubernetes/pki/ca.key $host:/etc/kubernetes/pki/ca.key - scp /etc/kubernetes/pki/sa.key $host:/etc/kubernetes/pki/sa.key - scp /etc/kubernetes/pki/sa.pub $host:/etc/kubernetes/pki/sa.pub - scp /etc/kubernetes/pki/front-proxy-ca.crt $host:/etc/kubernetes/pki/front-proxy-ca.crt - scp /etc/kubernetes/pki/front-proxy-ca.key $host:/etc/kubernetes/pki/front-proxy-ca.key - scp /etc/kubernetes/pki/etcd/ca.crt $host:/etc/kubernetes/pki/etcd/ca.crt - scp /etc/kubernetes/pki/etcd/ca.key $host:/etc/kubernetes/pki/etcd/ca.key - scp /etc/kubernetes/admin.conf $host:/etc/kubernetes/admin.conf -done -``` - -- 在k8s-master02上把节点加入集群 - -```sh -# 创建相关的证书以及kubelet配置文件 -$ kubeadm alpha phase certs all --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubeconfig controller-manager --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubeconfig scheduler --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubelet config write-to-disk --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubelet write-env-file --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubeconfig kubelet --config /root/kubeadm-config.yaml -$ systemctl restart kubelet - -# 设置k8s-master01以及k8s-master02的HOSTNAME以及地址 -$ export CP0_IP=192.168.20.20 -$ export CP0_HOSTNAME=k8s-master01 -$ export CP1_IP=192.168.20.21 -$ export CP1_HOSTNAME=k8s-master02 - -# etcd集群添加节点 -$ kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP1_HOSTNAME} https://${CP1_IP}:2380 -$ kubeadm alpha phase etcd local --config /root/kubeadm-config.yaml - -# 启动master节点 -$ kubeadm alpha phase kubeconfig all --config /root/kubeadm-config.yaml -$ kubeadm alpha phase controlplane all --config /root/kubeadm-config.yaml -$ kubeadm alpha phase mark-master --config /root/kubeadm-config.yaml - -# 修改/etc/kubernetes/admin.conf的服务地址指向本机 -$ sed -i "s/192.168.20.20:6443/192.168.20.21:6443/g" /etc/kubernetes/admin.conf -``` - -- 在k8s-master03上把节点加入集群 - -```sh -# 创建相关的证书以及kubelet配置文件 -$ kubeadm alpha phase certs all --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubeconfig controller-manager --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubeconfig scheduler --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubelet config write-to-disk --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubelet write-env-file --config /root/kubeadm-config.yaml -$ kubeadm alpha phase kubeconfig kubelet --config /root/kubeadm-config.yaml -$ systemctl restart kubelet - -# 设置k8s-master01以及k8s-master03的HOSTNAME以及地址 -$ export CP0_IP=192.168.20.20 -$ export CP0_HOSTNAME=k8s-master01 -$ export CP2_IP=192.168.20.22 -$ export CP2_HOSTNAME=k8s-master03 - -# etcd集群添加节点 -$ kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP2_HOSTNAME} https://${CP2_IP}:2380 -$ kubeadm alpha phase etcd local --config /root/kubeadm-config.yaml - -# 启动master节点 -$ kubeadm alpha phase kubeconfig all 
--config /root/kubeadm-config.yaml -$ kubeadm alpha phase controlplane all --config /root/kubeadm-config.yaml -$ kubeadm alpha phase mark-master --config /root/kubeadm-config.yaml - -# 修改/etc/kubernetes/admin.conf的服务地址指向本机 -$ sed -i "s/192.168.20.20:6443/192.168.20.22:6443/g" /etc/kubernetes/admin.conf -``` - -- 在所有master节点上允许hpa通过接口采集数据,修改`/etc/kubernetes/manifests/kube-controller-manager.yaml` - -```sh -$ vi /etc/kubernetes/manifests/kube-controller-manager.yaml - - --horizontal-pod-autoscaler-use-rest-clients=false -``` - -- 在所有master上允许istio的自动注入,修改`/etc/kubernetes/manifests/kube-apiserver.yaml` - -```sh -$ vi /etc/kubernetes/manifests/kube-apiserver.yaml - - --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota - -# 重启服务 -systemctl restart kubelet -``` - ---- - -[返回目录](#目录) - -### master负载均衡设置 - -#### keepalived安装配置 - -- 在所有master节点上重启keepalived - -```sh -$ systemctl restart keepalived -$ systemctl status keepalived - -# 检查keepalived的vip是否生效 -$ curl -k https://k8s-master-lb:6443 -``` - ---- - -[返回目录](#目录) - -#### nginx负载均衡配置 - -- 在所有master节点上启动nginx-lb - -```sh -# 使用docker-compose启动nginx负载均衡 -$ docker-compose --file=/root/nginx-lb/docker-compose.yaml up -d -$ docker-compose --file=/root/nginx-lb/docker-compose.yaml ps - -# 验证负载均衡的16443端口是否生效 -$ curl -k https://k8s-master-lb:16443 -``` - ---- - -[返回目录](#目录) - -#### kube-proxy高可用设置 - -- 在任意master节点上设置kube-proxy高可用 - -```sh -# 修改kube-proxy的configmap,把server指向load-balance地址和端口 -$ kubectl edit -n kube-system configmap/kube-proxy - server: https://192.168.20.10:16443 -``` - -- 在任意master节点上重启kube-proxy - -```sh -# 查找对应的kube-proxy pods -$ kubectl get pods --all-namespaces -o wide | grep proxy - -# 删除并重启对应的kube-proxy pods -$ kubectl delete pod -n kube-system kube-proxy-XXX -``` - ---- - -[返回目录](#目录) - -#### 验证高可用状态 - -- 在任意master节点上验证服务启动情况 - -```sh -# 检查节点情况 -$ kubectl get nodes -NAME STATUS ROLES AGE VERSION -k8s-master01 Ready master 1h v1.11.1 -k8s-master02 Ready master 58m v1.11.1 -k8s-master03 Ready master 55m v1.11.1 - -# 检查pods运行情况 -$ kubectl get pods -n kube-system -o wide -NAME READY STATUS RESTARTS AGE IP NODE -calico-node-nxskr 2/2 Running 0 46m 192.168.20.22 k8s-master03 -calico-node-xv5xt 2/2 Running 0 46m 192.168.20.20 k8s-master01 -calico-node-zsmgp 2/2 Running 0 46m 192.168.20.21 k8s-master02 -coredns-78fcdf6894-kfzc7 1/1 Running 0 1h 172.168.2.3 k8s-master03 -coredns-78fcdf6894-t957l 1/1 Running 0 46m 172.168.1.2 k8s-master02 -etcd-k8s-master01 1/1 Running 0 1h 192.168.20.20 k8s-master01 -etcd-k8s-master02 1/1 Running 0 58m 192.168.20.21 k8s-master02 -etcd-k8s-master03 1/1 Running 0 54m 192.168.20.22 k8s-master03 -kube-apiserver-k8s-master01 1/1 Running 0 52m 192.168.20.20 k8s-master01 -kube-apiserver-k8s-master02 1/1 Running 0 52m 192.168.20.21 k8s-master02 -kube-apiserver-k8s-master03 1/1 Running 0 51m 192.168.20.22 k8s-master03 -kube-controller-manager-k8s-master01 1/1 Running 0 34m 192.168.20.20 k8s-master01 -kube-controller-manager-k8s-master02 1/1 Running 0 33m 192.168.20.21 k8s-master02 -kube-controller-manager-k8s-master03 1/1 Running 0 33m 192.168.20.22 k8s-master03 -kube-proxy-g9749 1/1 Running 0 36m 192.168.20.22 k8s-master03 -kube-proxy-lhzhb 1/1 Running 0 35m 192.168.20.20 k8s-master01 -kube-proxy-x8jwt 1/1 Running 0 36m 192.168.20.21 k8s-master02 -kube-scheduler-k8s-master01 1/1 Running 1 1h 192.168.20.20 k8s-master01 -kube-scheduler-k8s-master02 1/1 Running 0 57m 192.168.20.21 
k8s-master02 -kube-scheduler-k8s-master03 1/1 Running 1 54m 192.168.20.22 k8s-master03 -``` - ---- - -[返回目录](#目录) - -#### 基础组件安装 - -- 在任意master节点上允许master上部署pod - -```sh -$ kubectl taint nodes --all node-role.kubernetes.io/master- -``` - -- 在任意master节点上安装calico - -```sh -$ kubectl apply -f calico/ -``` - -- 在任意master节点上安装metrics-server,从v1.11.0开始,性能采集不再采用heapster采集pod性能数据,而是使用metrics-server - -```sh -$ kubectl apply -f metrics-server/ - -# 等待5分钟,查看性能数据是否正常收集 -$ kubectl top pods -n kube-system -NAME CPU(cores) MEMORY(bytes) -calico-node-wkstv 47m 113Mi -calico-node-x2sn5 36m 104Mi -calico-node-xnh6s 32m 106Mi -coredns-78fcdf6894-2xc6s 14m 30Mi -coredns-78fcdf6894-rk6ch 10m 22Mi -kube-apiserver-k8s-master01 163m 816Mi -kube-apiserver-k8s-master02 79m 617Mi -kube-apiserver-k8s-master03 73m 614Mi -kube-controller-manager-k8s-master01 52m 141Mi -kube-controller-manager-k8s-master02 0m 14Mi -kube-controller-manager-k8s-master03 0m 13Mi -kube-proxy-269t2 4m 21Mi -kube-proxy-6jc8n 9m 37Mi -kube-proxy-7n8xb 9m 39Mi -kube-scheduler-k8s-master01 20m 25Mi -kube-scheduler-k8s-master02 15m 19Mi -kube-scheduler-k8s-master03 15m 19Mi -metrics-server-77b77f5fc6-jm8t6 3m 43Mi -``` - -- 在任意master节点上安装heapster,从v1.11.0开始,性能采集不再采用heapster采集pod性能数据,而是使用metrics-server,但是dashboard依然使用heapster呈现性能数据 - -```sh -# 安装heapster,需要等待5分钟,等待性能数据采集 -$ kubectl apply -f heapster/ -``` - -- 在任意master节点上安装dashboard - -```sh -# 安装dashboard -$ kubectl apply -f dashboard/ -``` - -> 成功安装后访问以下网址打开dashboard的登录界面,该界面提示需要登录token: https://k8s-master-lb:30000/ - -![dashboard-login](images/dashboard-login.png) - -- 在任意master节点上获取dashboard的登录token - -```sh -# 获取dashboard的登录token -$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') -``` - -> 使用token进行登录,进入后可以看到heapster采集的各个pod以及节点的性能数据 - -![dashboard](images/dashboard.png) - -- 在任意master节点上安装traefik - -```sh -# 创建k8s-master-lb域名的证书 -$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=k8s-master-lb" - -# 把证书写入到secret -kubectl -n kube-system create secret generic traefik-cert --from-file=tls.key --from-file=tls.crt - -# 安装traefik -$ kubectl apply -f traefik/ -``` - -> 成功安装后访问以下网址打开traefik管理界面: http://k8s-master-lb:30011/ - -![traefik](images/traefik.png) - -- 在任意master节点上安装istio - -```sh -# 安装istio -$ kubectl apply -f istio/ - -# 检查istio服务相关pods -$ kubectl get pods -n istio-system -NAME READY STATUS RESTARTS AGE -grafana-69c856fc69-jbx49 1/1 Running 1 21m -istio-citadel-7c4fc8957b-vdbhp 1/1 Running 1 21m -istio-cleanup-secrets-5g95n 0/1 Completed 0 21m -istio-egressgateway-64674bd988-44fg8 1/1 Running 0 18m -istio-egressgateway-64674bd988-dgvfm 1/1 Running 1 16m -istio-egressgateway-64674bd988-fprtc 1/1 Running 0 18m -istio-egressgateway-64674bd988-kl6pw 1/1 Running 3 16m -istio-egressgateway-64674bd988-nphpk 1/1 Running 3 16m -istio-galley-595b94cddf-c5ctw 1/1 Running 70 21m -istio-grafana-post-install-nhs47 0/1 Completed 0 21m -istio-ingressgateway-4vtk5 1/1 Running 2 21m -istio-ingressgateway-5rscp 1/1 Running 3 21m -istio-ingressgateway-6z95f 1/1 Running 3 21m -istio-policy-589977bff5-jx5fd 2/2 Running 3 21m -istio-policy-589977bff5-n74q8 2/2 Running 3 21m -istio-sidecar-injector-86c4d57d56-mfnbp 1/1 Running 39 21m -istio-statsd-prom-bridge-5698d5798c-xdpp6 1/1 Running 1 21m -istio-telemetry-85d6475bfd-8lvsm 2/2 Running 2 21m -istio-telemetry-85d6475bfd-bfjsn 2/2 Running 2 21m -istio-telemetry-85d6475bfd-d9ld9 2/2 Running 2 21m -istio-tracing-bd5765b5b-cmszp 1/1 Running 1 21m 
-prometheus-77c5fc7cd-zf7zr 1/1 Running 1 21m -servicegraph-6b99c87849-l6zm6 1/1 Running 1 21m -``` - -- 在任意master节点上安装prometheus - -```sh -# 安装prometheus -$ kubectl apply -f prometheus/ -``` - -> 成功安装后访问以下网址打开prometheus管理界面,查看相关性能采集数据: http://k8s-master-lb:30013/ - -![prometheus](images/prometheus.png) - ---- - -[返回目录](#目录) - -### worker节点设置 - -#### worker加入高可用集群 - -- 在所有workers节点上,使用kubeadm join加入kubernetes集群 - -```sh -# 清理节点上的kubernetes配置信息 -$ kubeadm reset - -# 使用之前kubeadm init执行结果记录的${YOUR_TOKEN}以及${YOUR_DISCOVERY_TOKEN_CA_CERT_HASH},把worker节点加入到集群 -$ kubeadm join 192.168.20.20:6443 --token ${YOUR_TOKEN} --discovery-token-ca-cert-hash sha256:${YOUR_DISCOVERY_TOKEN_CA_CERT_HASH} - - -# 在workers上修改kubernetes集群设置,让server指向nginx负载均衡的ip和端口 -$ sed -i "s/192.168.20.20:6443/192.168.20.10:16443/g" /etc/kubernetes/bootstrap-kubelet.conf -$ sed -i "s/192.168.20.20:6443/192.168.20.10:16443/g" /etc/kubernetes/kubelet.conf - -# 重启本节点 -$ systemctl restart docker kubelet -``` - -- 在任意master节点上验证节点状态 - -```sh -$ kubectl get nodes -NAME STATUS ROLES AGE VERSION -k8s-master01 Ready master 1h v1.11.1 -k8s-master02 Ready master 58m v1.11.1 -k8s-master03 Ready master 55m v1.11.1 -k8s-node01 Ready 30m v1.11.1 -k8s-node02 Ready 24m v1.11.1 -k8s-node03 Ready 22m v1.11.1 -k8s-node04 Ready 22m v1.11.1 -k8s-node05 Ready 16m v1.11.1 -k8s-node06 Ready 13m v1.11.1 -k8s-node07 Ready 11m v1.11.1 -k8s-node08 Ready 10m v1.11.1 -``` - ---- - -[返回目录](#目录) - -### 集群验证 - -#### 验证集群高可用设置 - -- 验证集群高可用 - -```sh -# 创建一个replicas=3的nginx deployment -$ kubectl run nginx --image=nginx --replicas=3 --port=80 -deployment "nginx" created - -# 检查nginx pod的创建情况 -$ kubectl get pods -l=run=nginx -o wide -NAME READY STATUS RESTARTS AGE IP NODE -nginx-58b94844fd-jvlqh 1/1 Running 0 9s 172.168.7.2 k8s-node05 -nginx-58b94844fd-mkt72 1/1 Running 0 9s 172.168.9.2 k8s-node07 -nginx-58b94844fd-xhb8x 1/1 Running 0 9s 172.168.11.2 k8s-node09 - -# 创建nginx的NodePort service -$ kubectl expose deployment nginx --type=NodePort --port=80 -service "nginx" exposed - -# 检查nginx service的创建情况 -$ kubectl get svc -l=run=nginx -o wide -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR -nginx NodePort 10.106.129.121 80:31443/TCP 7s run=nginx - -# 检查nginx NodePort service是否正常提供服务 -$ curl k8s-master-lb:31443 - - - -Welcome to nginx! - - - -

Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.
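# kube-proxy programs this NodePort on every node, so the service should also answer on any node IP directly.
# Hypothetical spot check (192.168.20.30 is k8s-node01 in the hosts list above):
$ curl 192.168.20.30:31443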

- - -``` - -- pod之间互访测试 - -```sh -# 启动一个client测试nginx是否可以访问 -kubectl run nginx-client -ti --rm --image=alpine -- ash -/ # wget -O - nginx -Connecting to nginx (10.102.101.78:80) -index.html 100% |*****************************************| 612 0:00:00 ETA - - - - -Welcome to nginx! - - - -

Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.
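# The "Connecting to nginx (10.102.101.78:80)" line above shows cluster DNS resolved the service name.
# Optional explicit check from the same client shell (assumes the service is still named nginx):
/ # nslookup nginx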

- - - -# 清除nginx的deployment以及service -kubectl delete deploy,svc nginx -``` - -- 测试HPA自动扩展 - -```sh -# 创建测试服务 -kubectl run nginx-server --requests=cpu=10m --image=nginx --port=80 -kubectl expose deployment nginx-server --port=80 - -# 创建hpa -kubectl autoscale deployment nginx-server --cpu-percent=10 --min=1 --max=10 -kubectl get hpa -kubectl describe hpa nginx-server - -# 给测试服务增加负载 -kubectl run -ti --rm load-generator --image=busybox -- ash -wget -q -O- http://nginx-server.default.svc.cluster.local > /dev/null -while true; do wget -q -O- http://nginx-server.default.svc.cluster.local > /dev/null; done - -# 检查hpa自动扩展情况,一般需要等待几分钟。结束增加负载后,pod自动缩容(自动缩容需要大概10-15分钟) -kubectl get hpa -w - -# 删除测试数据 -kubectl delete deploy,svc,hpa nginx-server -``` - ---- - -[返回目录](#目录) - -- 至此kubernetes高可用集群完成部署,并测试通过 😃 diff --git a/traefik/step1-traefik-rbac.yaml b/traefik/step1-traefik-rbac.yaml deleted file mode 100644 index 3206874..0000000 --- a/traefik/step1-traefik-rbac.yaml +++ /dev/null @@ -1,45 +0,0 @@ ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: traefik-ingress-controller - namespace: kube-system - ---- -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1beta1 -metadata: - name: traefik-ingress-controller -rules: - - apiGroups: - - "" - resources: - - services - - endpoints - - secrets - verbs: - - get - - list - - watch - - apiGroups: - - extensions - resources: - - ingresses - verbs: - - get - - list - - watch - ---- -kind: ClusterRoleBinding -apiVersion: rbac.authorization.k8s.io/v1beta1 -metadata: - name: traefik-ingress-controller -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: traefik-ingress-controller -subjects: -- kind: ServiceAccount - name: traefik-ingress-controller - namespace: kube-system diff --git a/traefik/step2-traefik-ds.yaml b/traefik/step2-traefik-ds.yaml deleted file mode 100644 index 8fe9b3f..0000000 --- a/traefik/step2-traefik-ds.yaml +++ /dev/null @@ -1,75 +0,0 @@ ---- -kind: ConfigMap -apiVersion: v1 -metadata: - name: traefik-conf - namespace: kube-system -data: - traefik.toml: |+ - defaultEntryPoints = ["http", "https"] - [entryPoints] - [entryPoints.http] - address = ":80" - # [entryPoints.http.redirect] - # entryPoint = "https" - [entryPoints.https] - address = ":443" - [entryPoints.https.tls] - [[entryPoints.https.tls.certificates]] - certFile = "/ssl/tls.crt" - keyFile = "/ssl/tls.key" - ---- -kind: DaemonSet -apiVersion: extensions/v1beta1 -metadata: - name: traefik-ingress-controller - namespace: kube-system - labels: - k8s-app: traefik-ingress-lb -spec: - template: - metadata: - labels: - k8s-app: traefik-ingress-lb - name: traefik-ingress-lb - spec: - volumes: - - name: traefik-cert - secret: - secretName: traefik-cert - - name: traefik-conf - configMap: - name: traefik-conf - serviceAccountName: traefik-ingress-controller - terminationGracePeriodSeconds: 60 - containers: - - image: traefik:v1.6.3 - name: traefik-ingress-lb - ports: - - name: http - containerPort: 80 - hostPort: 80 - - name: https - containerPort: 443 - hostPort: 443 - - name: admin - containerPort: 8080 - hostPort: 8080 - volumeMounts: - - mountPath: "/ssl" - name: "traefik-cert" - - mountPath: "/config" - name: "traefik-conf" - securityContext: - capabilities: - drop: - - ALL - add: - - NET_BIND_SERVICE - args: - - --api - - --kubernetes - - --logLevel=INFO - - --configfile=/config/traefik.toml - diff --git a/traefik/step3-traefik-service.yaml b/traefik/step3-traefik-service.yaml deleted file mode 100644 index e1272b7..0000000 --- 
a/traefik/step3-traefik-service.yaml +++ /dev/null @@ -1,15 +0,0 @@ ---- -apiVersion: v1 -kind: Service -metadata: - name: traefik-web-ui - namespace: kube-system -spec: - selector: - k8s-app: traefik-ingress-lb - type: NodePort - ports: - - port: 8080 - targetPort: 8080 - nodePort: 30011 - diff --git a/traefik/step4-traefik-ingress.yaml b/traefik/step4-traefik-ingress.yaml deleted file mode 100644 index fa5c30a..0000000 --- a/traefik/step4-traefik-ingress.yaml +++ /dev/null @@ -1,23 +0,0 @@ ---- -apiVersion: extensions/v1beta1 -kind: Ingress -metadata: - name: traefik-jenkins - namespace: default - annotations: - kubernetes.io/ingress.class: traefik - # ingress.kubernetes.io/auth-type: "basic" - # ingress.kubernetes.io/auth-secret: "traefik" - # traefik.frontend.rule.type: AddPrefix - # traefik.ingress.kubernetes.io/rewrite-target: / -spec: - rules: - - host: k8s-master-lb - http: - paths: - - path: / - backend: - serviceName: jenkins - servicePort: 8080 - # tls: - # - secretName: traefik-jenkins-tls-cert diff --git a/v1.6/README.md b/v1.6/README.md deleted file mode 100644 index c98ca8b..0000000 --- a/v1.6/README.md +++ /dev/null @@ -1,1226 +0,0 @@ -# kubeadm-highavailiability - kubernetes high availiability deployment based on kubeadm, for Kubernetes version v1.11.x/v1.9.x/v1.7.x/v1.6.x - -![k8s logo](../images/Kubernetes.png) - -- [中文文档(for v1.11.x版本)](../README_CN.md) -- [English document(for v1.11.x version)](../README.md) -- [中文文档(for v1.9.x版本)](../v1.9/README_CN.md) -- [English document(for v1.9.x version)](../v1.9/README.md) -- [中文文档(for v1.7.x版本)](../v1.7/README_CN.md) -- [English document(for v1.7.x version)](../v1.7/README.md) -- [中文文档(for v1.6.x版本)](../v1.6/README_CN.md) -- [English document(for v1.6.x version)](../v1.6/README.md) - ---- - -- [GitHub project URL](https://github.com/cookeem/kubeadm-ha/) -- [OSChina project URL](https://git.oschina.net/cookeem/kubeadm-ha/) - ---- - -- This operation instruction is for version v1.6.x kubernetes cluster - -### category - -1. [deployment architecture](#deployment-architecture) - 1. [deployment architecture summary](#deployment-architecture-summary) - 1. [detail deployment architecture](#detail-deployment-architecture) - 1. [hosts list](#hosts-list) -1. [prerequisites](#prerequisites) - 1. [version info](#version-info) - 1. [required docker images](#required-docker-images) - 1. [system configuration](#system-configuration) -1. [kubernetes installation](#kubernetes-installation) - 1. [kubernetes and related services installation](#kubernetes-and-related-services-installation) - 1. [load docker images](#load-docker-images) -1. [use kubeadm to init first master](#use-kubeadm-to-init-first-master) - 1. [deploy independent etcd tls cluster](#deploy-independent-etcd-tls-cluster) - 1. [kubeadm init](#kubeadm-init) - 1. [install flannel networks addon](#install-flannel-networks-addon) - 1. [install dashboard addon](#install-dashboard-addon) - 1. [install heapster addon](#install-heapster-addon) -1. [kubernetes masters high avialiability configuration](#kubernetes-masters-high-avialiability-configuration) - 1. [copy configuration files](#copy-configuration-files) - 1. [create certificatie](#create-certificatie) - 1. [edit configuration files](#edit-configuration-files) - 1. [verify master high avialiability](#verify-master-high-avialiability) - 1. [keepalived installation](#keepalived-installation) - 1. [nginx load balancer configuration](#nginx-load-balancer-configuration) - 1. [kube-proxy configuration](#kube-proxy-configuration) - 1. 
[verfify master high avialiability with keepalived](#verfify-master-high-avialiability-with-keepalived) -1. [k8s-nodes join the kubernetes cluster](#k8s-nodes-join-the-kubernetes-cluster) - 1. [use kubeadm to join the cluster](#use-kubeadm-to-join-the-cluster) - 1. [deploy nginx application to verify installation](#deploy-nginx-application-to-verify-installation) - - -### deployment architecture - -#### deployment architecture summary - -![ha logo](../images/ha.png) - ---- -[category](#category) - -#### detail deployment architecture - -![k8s ha](../images/k8s-ha.png) - -* kubernetes components: - -> kube-apiserver: exposes the Kubernetes API. It is the front-end for the Kubernetes control plane. It is designed to scale horizontally – that is, it scales by deploying more instances. - -> etcd: is used as Kubernetes’ backing store. All cluster data is stored here. Always have a backup plan for etcd’s data for your Kubernetes cluster. - - -> kube-scheduler: watches newly created pods that have no node assigned, and selects a node for them to run on. - - -> kube-controller-manager: runs controllers, which are the background threads that handle routine tasks in the cluster. Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process. - -> kubelet: is the primary node agent. It watches for pods that have been assigned to its node (either by apiserver or via local configuration file) - -> kube-proxy: enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding. - - -* load balancer - -> keepalived cluster config a virtual IP address (192.168.60.80), this virtual IP address point to k8s-master1, k8s-master2, k8s-master3. - -> nginx service as the load balancer of k8s-master1, k8s-master2, k8s-master3's apiserver. The other nodes kubernetes services connect the keepalived virtual ip address (192.168.60.80) and nginx exposed port (8443) to communicate with the master cluster's apiservers. 
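* optional quick check (not part of the deployment steps): the commands below are only a rough sketch for verifying this HA entry point from any node once keepalived and nginx are configured as described later in this guide; the `/healthz` probe assumes the apiservers accept anonymous requests to that path.

```
# confirm the keepalived virtual IP answers
ping -c 3 192.168.60.80

# probe an apiserver through the nginx listener on the virtual IP;
# nginx only forwards TCP and TLS is terminated by the apiserver itself,
# so -k skips certificate verification for this quick check
curl -k https://192.168.60.80:8443/healthz
```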
- ---- -[category](#category) - -#### hosts list - - HostName | IPAddress | Notes | Components - :--- | :--- | :--- | :--- - k8s-master1 | 192.168.60.71 | master node 1 | keepalived, nginx, etcd, kubelet, kube-apiserver, kube-scheduler, kube-proxy, kube-dashboard, heapster - k8s-master2 | 192.168.60.72 | master node 2 | keepalived, nginx, etcd, kubelet, kube-apiserver, kube-scheduler, kube-proxy, kube-dashboard, heapster - k8s-master3 | 192.168.60.73 | master node 3 | keepalived, nginx, etcd, kubelet, kube-apiserver, kube-scheduler, kube-proxy, kube-dashboard, heapster - N/A | 192.168.60.80 | keepalived virtual IP | N/A - k8s-node1 ~ 8 | 192.168.60.81 ~ 88 | 8 worker nodes | kubelet, kube-proxy - ---- -[category](#category) - -### prerequisites - -#### version info - -* Linux version: CentOS 7.3.1611 - -``` -cat /etc/redhat-release -CentOS Linux release 7.3.1611 (Core) -``` - -* docker version: 1.12.6 - -``` -$ docker version -Client: - Version: 1.12.6 - API version: 1.24 - Go version: go1.6.4 - Git commit: 78d1802 - Built: Tue Jan 10 20:20:01 2017 - OS/Arch: linux/amd64 - -Server: - Version: 1.12.6 - API version: 1.24 - Go version: go1.6.4 - Git commit: 78d1802 - Built: Tue Jan 10 20:20:01 2017 - OS/Arch: linux/amd64 -``` - -* kubeadm version: v1.6.4 - -``` -$ kubeadm version -kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"} -``` - -* kubelet version: v1.6.4 - -``` -$ kubelet --version -Kubernetes v1.6.4 -``` - ---- - -[category](#category) - -#### required docker images - -* on your local laptop MacOSX: pull related docker images - -``` -$ docker pull gcr.io/google_containers/kube-apiserver-amd64:v1.6.4 -$ docker pull gcr.io/google_containers/kube-proxy-amd64:v1.6.4 -$ docker pull gcr.io/google_containers/kube-controller-manager-amd64:v1.6.4 -$ docker pull gcr.io/google_containers/kube-scheduler-amd64:v1.6.4 -$ docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1 -$ docker pull quay.io/coreos/flannel:v0.7.1-amd64 -$ docker pull gcr.io/google_containers/heapster-amd64:v1.3.0 -$ docker pull gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1 -$ docker pull gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1 -$ docker pull gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1 -$ docker pull gcr.io/google_containers/etcd-amd64:3.0.17 -$ docker pull gcr.io/google_containers/heapster-grafana-amd64:v4.0.2 -$ docker pull gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1 -$ docker pull nginx:latest -$ docker pull gcr.io/google_containers/pause-amd64:3.0 -``` - -* on your local laptop MacOSX: clone codes from git and change working directory in codes - -``` -$ git clone https://github.com/cookeem/kubeadm-ha -$ cd kubeadm-ha -``` - -* on your local laptop MacOSX: save related docker images in docker-images directory - -``` -$ mkdir -p docker-images -$ docker save -o docker-images/kube-apiserver-amd64 gcr.io/google_containers/kube-apiserver-amd64:v1.6.4 -$ docker save -o docker-images/kube-proxy-amd64 gcr.io/google_containers/kube-proxy-amd64:v1.6.4 -$ docker save -o docker-images/kube-controller-manager-amd64 gcr.io/google_containers/kube-controller-manager-amd64:v1.6.4 -$ docker save -o docker-images/kube-scheduler-amd64 gcr.io/google_containers/kube-scheduler-amd64:v1.6.4 -$ docker save -o docker-images/kubernetes-dashboard-amd64 
gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1 -$ docker save -o docker-images/flannel quay.io/coreos/flannel:v0.7.1-amd64 -$ docker save -o docker-images/heapster-amd64 gcr.io/google_containers/heapster-amd64:v1.3.0 -$ docker save -o docker-images/k8s-dns-sidecar-amd64 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1 -$ docker save -o docker-images/k8s-dns-kube-dns-amd64 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1 -$ docker save -o docker-images/k8s-dns-dnsmasq-nanny-amd64 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1 -$ docker save -o docker-images/etcd-amd64 gcr.io/google_containers/etcd-amd64:3.0.17 -$ docker save -o docker-images/heapster-grafana-amd64 gcr.io/google_containers/heapster-grafana-amd64:v4.0.2 -$ docker save -o docker-images/heapster-influxdb-amd64 gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1 -$ docker save -o docker-images/pause-amd64 gcr.io/google_containers/pause-amd64:3.0 -$ docker save -o docker-images/nginx nginx:latest -``` - -* on your local laptop MacOSX: copy all codes and docker images directory to all kubernetes nodes - -``` -$ scp -r * root@k8s-master1:/root/kubeadm-ha -$ scp -r * root@k8s-master2:/root/kubeadm-ha -$ scp -r * root@k8s-master3:/root/kubeadm-ha -$ scp -r * root@k8s-node1:/root/kubeadm-ha -$ scp -r * root@k8s-node2:/root/kubeadm-ha -$ scp -r * root@k8s-node3:/root/kubeadm-ha -$ scp -r * root@k8s-node4:/root/kubeadm-ha -$ scp -r * root@k8s-node5:/root/kubeadm-ha -$ scp -r * root@k8s-node6:/root/kubeadm-ha -$ scp -r * root@k8s-node7:/root/kubeadm-ha -$ scp -r * root@k8s-node8:/root/kubeadm-ha -``` - ---- -[category](#category) - -#### system configuration - -* on all kubernetes nodes: add kubernetes' repository - -``` -$ cat < /etc/yum.repos.d/kubernetes.repo -[kubernetes] -name=Kubernetes -baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 -enabled=1 -gpgcheck=1 -repo_gpgcheck=1 -gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg - https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg -EOF -``` - -* on all kubernetes nodes: use yum to update system - -``` -$ yum update -y -``` - -* on all kubernetes nodes: turn off firewalld service - -``` -$ systemctl disable firewalld && systemctl stop firewalld && systemctl status firewalld -``` - -* on all kubernetes nodes: set SELINUX to permissive mode - -``` -$ vi /etc/selinux/config -SELINUX=permissive -``` - -* on all kubernetes nodes: set iptables parameters - -``` -$ vi /etc/sysctl.d/k8s.conf -net.bridge.bridge-nf-call-iptables = 1 -net.bridge.bridge-nf-call-ip6tables = 1 -``` - -* on all kubernetes nodes: reboot host - -``` -$ reboot -``` - ---- -[category](#category) - -### kubernetes installation - -#### kubernetes and related services installation - -* on all kubernetes nodes: check SELINUX mode must set as permissive mode - -``` -$ getenforce -Permissive -``` - -* on all kubernetes nodes: install kubernetes and related services, then start up kubelet and docker daemon - -``` -$ yum search docker --showduplicates -$ yum install docker-1.12.6-16.el7.centos.x86_64 - -$ yum search kubelet --showduplicates -$ yum install kubelet-1.6.4-0.x86_64 - -$ yum search kubeadm --showduplicates -$ yum install kubeadm-1.6.4-0.x86_64 - -$ yum search kubernetes-cni --showduplicates -$ yum install kubernetes-cni-0.5.1-0.x86_64 - -$ systemctl enable docker && systemctl start docker -$ systemctl enable kubelet && systemctl start kubelet -``` - ---- -[category](#category) - -#### load docker images - -* on all kubernetes nodes: 
load docker images - -``` -$ docker load -i /root/kubeadm-ha/docker-images/etcd-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/flannel -$ docker load -i /root/kubeadm-ha/docker-images/heapster-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/heapster-grafana-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/heapster-influxdb-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/k8s-dns-dnsmasq-nanny-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/k8s-dns-kube-dns-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/k8s-dns-sidecar-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/kube-apiserver-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/kube-controller-manager-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/kube-proxy-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/kubernetes-dashboard-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/kube-scheduler-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/pause-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/nginx - -$ docker images -REPOSITORY TAG IMAGE ID CREATED SIZE -gcr.io/google_containers/kube-apiserver-amd64 v1.6.4 4e3810a19a64 5 weeks ago 150.6 MB -gcr.io/google_containers/kube-proxy-amd64 v1.6.4 e073a55c288b 5 weeks ago 109.2 MB -gcr.io/google_containers/kube-controller-manager-amd64 v1.6.4 0ea16a85ac34 5 weeks ago 132.8 MB -gcr.io/google_containers/kube-scheduler-amd64 v1.6.4 1fab9be555e1 5 weeks ago 76.75 MB -gcr.io/google_containers/kubernetes-dashboard-amd64 v1.6.1 71dfe833ce74 6 weeks ago 134.4 MB -quay.io/coreos/flannel v0.7.1-amd64 cd4ae0be5e1b 10 weeks ago 77.76 MB -gcr.io/google_containers/heapster-amd64 v1.3.0 f9d33bedfed3 3 months ago 68.11 MB -gcr.io/google_containers/k8s-dns-sidecar-amd64 1.14.1 fc5e302d8309 4 months ago 44.52 MB -gcr.io/google_containers/k8s-dns-kube-dns-amd64 1.14.1 f8363dbf447b 4 months ago 52.36 MB -gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 1.14.1 1091847716ec 4 months ago 44.84 MB -gcr.io/google_containers/etcd-amd64 3.0.17 243830dae7dd 4 months ago 168.9 MB -gcr.io/google_containers/heapster-grafana-amd64 v4.0.2 a1956d2a1a16 5 months ago 131.5 MB -gcr.io/google_containers/heapster-influxdb-amd64 v1.1.1 d3fccbedd180 5 months ago 11.59 MB -nginx latest 01f818af747d 6 months ago 181.6 MB -gcr.io/google_containers/pause-amd64 3.0 99e59f495ffa 14 months ago 746.9 kB -``` - ---- -[category](#category) - -### use kubeadm to init first master - -#### deploy independent etcd tls cluster - -* on k8s-master1: use docker to start independent etcd tls cluster - -``` -$ docker stop etcd && docker rm etcd -$ rm -rf /var/lib/etcd-cluster -$ mkdir -p /var/lib/etcd-cluster -$ docker run -d \ ---restart always \ --v /etc/ssl/certs:/etc/ssl/certs \ --v /var/lib/etcd-cluster:/var/lib/etcd \ --p 4001:4001 \ --p 2380:2380 \ --p 2379:2379 \ ---name etcd \ -gcr.io/google_containers/etcd-amd64:3.0.17 \ -etcd --name=etcd0 \ ---advertise-client-urls=http://192.168.60.71:2379,http://192.168.60.71:4001 \ ---listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \ ---initial-advertise-peer-urls=http://192.168.60.71:2380 \ ---listen-peer-urls=http://0.0.0.0:2380 \ ---initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \ ---initial-cluster=etcd0=http://192.168.60.71:2380,etcd1=http://192.168.60.72:2380,etcd2=http://192.168.60.73:2380 \ ---initial-cluster-state=new \ ---auto-tls \ ---peer-auto-tls \ ---data-dir=/var/lib/etcd -``` - -* on k8s-master2: use docker to start independent etcd tls cluster - -``` -$ docker stop etcd && docker rm 
etcd -$ rm -rf /var/lib/etcd-cluster -$ mkdir -p /var/lib/etcd-cluster -$ docker run -d \ ---restart always \ --v /etc/ssl/certs:/etc/ssl/certs \ --v /var/lib/etcd-cluster:/var/lib/etcd \ --p 4001:4001 \ --p 2380:2380 \ --p 2379:2379 \ ---name etcd \ -gcr.io/google_containers/etcd-amd64:3.0.17 \ -etcd --name=etcd1 \ ---advertise-client-urls=http://192.168.60.72:2379,http://192.168.60.72:4001 \ ---listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \ ---initial-advertise-peer-urls=http://192.168.60.72:2380 \ ---listen-peer-urls=http://0.0.0.0:2380 \ ---initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \ ---initial-cluster=etcd0=http://192.168.60.71:2380,etcd1=http://192.168.60.72:2380,etcd2=http://192.168.60.73:2380 \ ---initial-cluster-state=new \ ---auto-tls \ ---peer-auto-tls \ ---data-dir=/var/lib/etcd -``` - -* on k8s-master3: use docker to start independent etcd tls cluster - -``` -$ docker stop etcd && docker rm etcd -$ rm -rf /var/lib/etcd-cluster -$ mkdir -p /var/lib/etcd-cluster -$ docker run -d \ ---restart always \ --v /etc/ssl/certs:/etc/ssl/certs \ --v /var/lib/etcd-cluster:/var/lib/etcd \ --p 4001:4001 \ --p 2380:2380 \ --p 2379:2379 \ ---name etcd \ -gcr.io/google_containers/etcd-amd64:3.0.17 \ -etcd --name=etcd2 \ ---advertise-client-urls=http://192.168.60.73:2379,http://192.168.60.73:4001 \ ---listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \ ---initial-advertise-peer-urls=http://192.168.60.73:2380 \ ---listen-peer-urls=http://0.0.0.0:2380 \ ---initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \ ---initial-cluster=etcd0=http://192.168.60.71:2380,etcd1=http://192.168.60.72:2380,etcd2=http://192.168.60.73:2380 \ ---initial-cluster-state=new \ ---auto-tls \ ---peer-auto-tls \ ---data-dir=/var/lib/etcd -``` - -* on k8s-master1, k8s-master2, k8s-master3: check etcd cluster health - -``` -$ docker exec -ti etcd ash - -$ etcdctl member list -1a32c2d3f1abcad0: name=etcd2 peerURLs=http://192.168.60.73:2380 clientURLs=http://192.168.60.73:2379,http://192.168.60.73:4001 isLeader=false -1da4f4e8b839cb79: name=etcd1 peerURLs=http://192.168.60.72:2380 clientURLs=http://192.168.60.72:2379,http://192.168.60.72:4001 isLeader=false -4238bcb92d7f2617: name=etcd0 peerURLs=http://192.168.60.71:2380 clientURLs=http://192.168.60.71:2379,http://192.168.60.71:4001 isLeader=true - -$ etcdctl cluster-health -member 1a32c2d3f1abcad0 is healthy: got healthy result from http://192.168.60.73:2379 -member 1da4f4e8b839cb79 is healthy: got healthy result from http://192.168.60.72:2379 -member 4238bcb92d7f2617 is healthy: got healthy result from http://192.168.60.71:2379 -cluster is healthy - -$ exit -``` - ---- -[category](#category) - -#### kubeadm init - -* on k8s-master1: edit kubeadm-init-v1.6.x.yaml file, set etcd.endpoints.${HOST_IP} to k8s-master1, k8s-master2, k8s-master3's IP address - -``` -$ vi /root/kubeadm-ha/kubeadm-init-v1.6.x.yaml -apiVersion: kubeadm.k8s.io/v1alpha1 -kind: MasterConfiguration -kubernetesVersion: v1.6.4 -networking: - podSubnet: 10.244.0.0/16 -etcd: - endpoints: - - http://192.168.60.71:2379 - - http://192.168.60.72:2379 - - http://192.168.60.73:2379 -``` - -* if kubeadm init stuck at tips below, that may because cgroup-driver parameters different with your docker service's setting -* [apiclient] Created API client, waiting for the control plane to become ready -* use "journalctl -t kubelet -S '2017-06-08'" to check logs, and you will find error below: -* error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup 
driver: "systemd" -* you must change "KUBELET_CGROUP_ARGS=--cgroup-driver=systemd" to "KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs" - -``` -$ vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -#Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd" -Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs" - -$ systemctl daemon-reload && systemctl restart kubelet -``` - -* on k8s-master1: use kubeadm to init kubernetes cluster and connect external etcd cluster - -``` -$ kubeadm init --config=/root/kubeadm-ha/kubeadm-init-v1.6.x.yaml -``` - -* on k8s-master1: set environment variables $KUBECONFIG, make kubectl connect kubelet - -``` -$ vi ~/.bashrc -export KUBECONFIG=/etc/kubernetes/admin.conf - -$ source ~/.bashrc -``` - ---- -[category](#category) - -#### install flannel networks addon - -* on k8s-master1: install flannel networks addon, otherwise kube-dns pod will keep status at ContainerCreating - -``` -$ kubectl create -f /root/kubeadm-ha/kube-flannel -clusterrole "flannel" created -clusterrolebinding "flannel" created -serviceaccount "flannel" created -configmap "kube-flannel-cfg" created -daemonset "kube-flannel-ds" created -``` - -* on k8s-master1: after flannel networks addon installed, wait about 3 minutes, then all pods status are Running - -``` -$ kubectl get pods --all-namespaces -o wide -NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE -kube-system kube-apiserver-k8s-master1 1/1 Running 0 3m 192.168.60.71 k8s-master1 -kube-system kube-controller-manager-k8s-master1 1/1 Running 0 3m 192.168.60.71 k8s-master1 -kube-system kube-dns-3913472980-k9mt6 3/3 Running 0 4m 10.244.0.104 k8s-master1 -kube-system kube-flannel-ds-3hhjd 2/2 Running 0 1m 192.168.60.71 k8s-master1 -kube-system kube-proxy-rzq3t 1/1 Running 0 4m 192.168.60.71 k8s-master1 -kube-system kube-scheduler-k8s-master1 1/1 Running 0 3m 192.168.60.71 k8s-master1 -``` - ---- -[category](#category) - -#### install dashboard addon - -* on k8s-master1: install dashboard webUI addon - -``` -$ kubectl create -f /root/kubeadm-ha/kube-dashboard/ -serviceaccount "kubernetes-dashboard" created -clusterrolebinding "kubernetes-dashboard" created -deployment "kubernetes-dashboard" created -service "kubernetes-dashboard" created -``` - -* on k8s-master1: start up proxy - -``` -$ kubectl proxy --address='0.0.0.0' & -``` - -* on your local laptop MacOSX: use browser to check dashboard work correctly - -``` -http://k8s-master1:30000 -``` - -![dashboard](images/dashboard.png) - ---- -[category](#category) - -#### install heapster addon - -* on k8s-master1: make master be able to schedule pods - -``` -$ kubectl taint nodes --all node-role.kubernetes.io/master- -node "k8s-master1" tainted -``` - -* on k8s-master1: install heapster addon, the performance monitor addon - -``` -$ kubectl create -f /root/kubeadm-ha/kube-heapster -``` - -* on k8s-master1: restart docker and kubelet service, to make heapster work immediately - -``` -$ systemctl restart docker kubelet -``` - -* on k8s-master1: check pods status - -``` -$ kubectl get all --all-namespaces -o wide -NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE -kube-system heapster-783524908-kn6jd 1/1 Running 1 9m 10.244.0.111 k8s-master1 -kube-system kube-apiserver-k8s-master1 1/1 Running 1 15m 192.168.60.71 k8s-master1 -kube-system kube-controller-manager-k8s-master1 1/1 Running 1 15m 192.168.60.71 k8s-master1 -kube-system kube-dns-3913472980-k9mt6 3/3 Running 3 16m 10.244.0.110 k8s-master1 -kube-system kube-flannel-ds-3hhjd 2/2 Running 3 13m 192.168.60.71 k8s-master1 -kube-system 
kube-proxy-rzq3t 1/1 Running 1 16m 192.168.60.71 k8s-master1 -kube-system kube-scheduler-k8s-master1 1/1 Running 1 15m 192.168.60.71 k8s-master1 -kube-system kubernetes-dashboard-2039414953-d46vw 1/1 Running 1 11m 10.244.0.109 k8s-master1 -kube-system monitoring-grafana-3975459543-8l94z 1/1 Running 1 9m 10.244.0.112 k8s-master1 -kube-system monitoring-influxdb-3480804314-72ltf 1/1 Running 1 9m 10.244.0.113 k8s-master1 -``` - -* on your local laptop MacOSX: use browser to check dashboard, if it show CPU and Memory Usage info, then heapster work! - -``` -http://k8s-master1:30000 -``` - -![heapster](images/heapster.png) - -* now we finish the first kubernetes master installation, and flannel dashboard heapster work on master correctly - ---- -[category](#category) - -### kubernetes masters high avialiability configuration - -#### copy configuration files - -* on k8s-master1: copy /etc/kubernetes/ directory to k8s-master2 and k8s-master3 - -``` -scp -r /etc/kubernetes/ k8s-master2:/etc/ -scp -r /etc/kubernetes/ k8s-master3:/etc/ -``` - -* on k8s-master2, k8s-master3: restart kubelet service, and make sure kubelet status is active (running) - -``` -$ systemctl daemon-reload && systemctl restart kubelet - -$ systemctl status kubelet -● kubelet.service - kubelet: The Kubernetes Node Agent - Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled) - Drop-In: /etc/systemd/system/kubelet.service.d - └─10-kubeadm.conf - Active: active (running) since Tue 2017-06-27 16:24:22 CST; 1 day 17h ago - Docs: http://kubernetes.io/docs/ - Main PID: 2780 (kubelet) - Memory: 92.9M - CGroup: /system.slice/kubelet.service - ├─2780 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-... - └─2811 journalctl -k -f -``` - -* on k8s-master2, k8s-master3: set environment variables $KUBECONFIG, make kubectl connect kubelet - -``` -$ vi ~/.bashrc -export KUBECONFIG=/etc/kubernetes/admin.conf - -$ source ~/.bashrc -``` - -* on k8s-master2, k8s-master3: check nodes status, you will found that k8s-master2 and k8s-master3 are joined - -``` -$ kubectl get nodes -o wide -NAME STATUS AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION -k8s-master1 Ready 26m v1.6.4 CentOS Linux 7 (Core) 3.10.0-514.6.1.el7.x86_64 -k8s-master2 Ready 2m v1.6.4 CentOS Linux 7 (Core) 3.10.0-514.21.1.el7.x86_64 -k8s-master3 Ready 2m v1.6.4 CentOS Linux 7 (Core) 3.10.0-514.21.1.el7.x86_64 -``` - -* on k8s-master2, k8s-master3: edit kube-apiserver.yaml file, replace ${HOST_IP} to current host's IP address - -``` -$ vi /etc/kubernetes/manifests/kube-apiserver.yaml - - --advertise-address=${HOST_IP} -``` - -* on k8s-master2, k8s-master3: edit kubelet.conf file, replace ${HOST_IP} to current host's IP address - -``` -$ vi /etc/kubernetes/kubelet.conf -server: https://${HOST_IP}:6443 -``` - -* on k8s-master2, k8s-master3: restart docker and kubelet services - -``` -$ systemctl daemon-reload && systemctl restart docker kubelet -``` - ---- -[category](#category) - -#### create certificatie - -* on k8s-master2, k8s-master3: after kubelet.conf modified, because IP address in apiserver.crt and apiserver.key file are different from kubelet.conf, kubelet service will stop, you must use ca.crt and ca.key to re-sign your certificates, check apiserver.crt cerfificate info: - -``` -openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt -Certificate: - Data: - Version: 3 (0x2) - Serial Number: 9486057293403496063 (0x83a53ed95c519e7f) - Signature Algorithm: sha1WithRSAEncryption - Issuer: CN=kubernetes - Validity - Not 
Before: Jun 22 16:22:44 2017 GMT - Not After : Jun 22 16:22:44 2018 GMT - Subject: CN=kube-apiserver, - Subject Public Key Info: - Public Key Algorithm: rsaEncryption - Public-Key: (2048 bit) - Modulus: - d0:10:4a:3b:c4:62:5d:ae:f8:f1:16:48:b3:77:6b: - 53:4b - Exponent: 65537 (0x10001) - X509v3 extensions: - X509v3 Subject Alternative Name: - DNS:k8s-master1, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:192.168.60.71 - Signature Algorithm: sha1WithRSAEncryption - dd:68:16:f9:11:be:c3:3c:be:89:9f:14:60:6b:e0:47:c7:91: - 9e:78:ab:ce -``` - -* on k8s-master1, k8s-master2, k8s-master3: use ca.key and ca.crt to create apiserver.crt and apiserver.key - -``` -$ mkdir -p /etc/kubernetes/pki-local - -$ cd /etc/kubernetes/pki-local -``` - -* on k8s-master1, k8s-master2, k8s-master3: create a new apiserver.key - -``` -$ openssl genrsa -out apiserver.key 2048 -``` - -* on k8s-master1, k8s-master2, k8s-master3: create a new apiserver.csr file - -``` -$ openssl req -new -key apiserver.key -subj "/CN=kube-apiserver," -out apiserver.csr -``` - -* on k8s-master1, k8s-master2, k8s-master3: edit apiserver.ext file, replace ${HOST_IP} to current host's IP address, replace ${VIRTUAL_IP} to keepalived virtual IP(192.168.60.80) - -``` -$ vi apiserver.ext -subjectAltName = DNS:${HOST_NAME},DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP:10.96.0.1, IP:${HOST_IP}, IP:${VIRTUAL_IP} -``` - -* on k8s-master1, k8s-master2, k8s-master3: use ca.key and ca.crt to create apiserver.crt file - -``` -$ openssl x509 -req -in apiserver.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out apiserver.crt -days 365 -extfile /etc/kubernetes/pki-local/apiserver.ext -``` - -* on k8s-master1, k8s-master2, k8s-master3: check the new certificate: - -``` -$ openssl x509 -noout -text -in apiserver.crt -Certificate: - Data: - Version: 3 (0x2) - Serial Number: 9486057293403496063 (0x83a53ed95c519e7f) - Signature Algorithm: sha1WithRSAEncryption - Issuer: CN=kubernetes - Validity - Not Before: Jun 22 16:22:44 2017 GMT - Not After : Jun 22 16:22:44 2018 GMT - Subject: CN=kube-apiserver, - Subject Public Key Info: - Public Key Algorithm: rsaEncryption - Public-Key: (2048 bit) - Modulus: - d0:10:4a:3b:c4:62:5d:ae:f8:f1:16:48:b3:77:6b: - 53:4b - Exponent: 65537 (0x10001) - X509v3 extensions: - X509v3 Subject Alternative Name: - DNS:k8s-master3, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:192.168.60.73, IP Address:192.168.60.80 - Signature Algorithm: sha1WithRSAEncryption - dd:68:16:f9:11:be:c3:3c:be:89:9f:14:60:6b:e0:47:c7:91: - 9e:78:ab:ce -``` - -* on k8s-master1, k8s-master2, k8s-master3: copy apiserver.crt and apiserver.key to /etc/kubernetes/pki directory - -``` -$ cp apiserver.crt apiserver.key /etc/kubernetes/pki/ -``` - ---- -[category](#category) - -#### edit configuration files - -* on k8s-master2, k8s-master3: edit admin.conf file, replace ${HOST_IP} to current host's IP address - -``` -$ vi /etc/kubernetes/admin.conf - server: https://${HOST_IP}:6443 -``` - -* on k8s-master2, k8s-master3: edit controller-manager.conf file, replace ${HOST_IP} to current host's IP address - -``` -$ vi /etc/kubernetes/controller-manager.conf - server: https://${HOST_IP}:6443 -``` - -* on k8s-master2, k8s-master3: edit scheduler.conf file, replace ${HOST_IP} to 
current host's IP address - -``` -$ vi /etc/kubernetes/scheduler.conf - server: https://${HOST_IP}:6443 -``` - -* on k8s-master1, k8s-master2, k8s-master3: restart docker and kubelet services - -``` -$ systemctl daemon-reload && systemctl restart docker kubelet -``` - ---- -[category](#category) - -#### verify master high avialiability - -* on k8s-master1 or k8s-master2 or k8s-master3: check all master nodes pods startup status. apiserver controller-manager kube-scheduler proxy flannel running at k8s-master1, k8s-master2, k8s-master3 successfully. - -``` -$ kubectl get pod --all-namespaces -o wide | grep k8s-master2 -kube-system kube-apiserver-k8s-master2 1/1 Running 1 55s 192.168.60.72 k8s-master2 -kube-system kube-controller-manager-k8s-master2 1/1 Running 2 18m 192.168.60.72 k8s-master2 -kube-system kube-flannel-ds-t8gkh 2/2 Running 4 18m 192.168.60.72 k8s-master2 -kube-system kube-proxy-bpgqw 1/1 Running 1 18m 192.168.60.72 k8s-master2 -kube-system kube-scheduler-k8s-master2 1/1 Running 2 18m 192.168.60.72 k8s-master2 - -$ kubectl get pod --all-namespaces -o wide | grep k8s-master3 -kube-system kube-apiserver-k8s-master3 1/1 Running 1 1m 192.168.60.73 k8s-master3 -kube-system kube-controller-manager-k8s-master3 1/1 Running 2 18m 192.168.60.73 k8s-master3 -kube-system kube-flannel-ds-tmqmx 2/2 Running 4 18m 192.168.60.73 k8s-master3 -kube-system kube-proxy-4stg3 1/1 Running 1 18m 192.168.60.73 k8s-master3 -kube-system kube-scheduler-k8s-master3 1/1 Running 2 18m 192.168.60.73 k8s-master3 -``` - -* on k8s-master1 or k8s-master2 or k8s-master3: use kubectl logs to check controller-manager and scheduler's leader election result, only one is working - -``` -$ kubectl logs -n kube-system kube-controller-manager-k8s-master1 -$ kubectl logs -n kube-system kube-controller-manager-k8s-master2 -$ kubectl logs -n kube-system kube-controller-manager-k8s-master3 - -$ kubectl logs -n kube-system kube-scheduler-k8s-master1 -$ kubectl logs -n kube-system kube-scheduler-k8s-master2 -$ kubectl logs -n kube-system kube-scheduler-k8s-master3 -``` - -* on k8s-master1 or k8s-master2 or k8s-master3: check deployment - -``` -$ kubectl get deploy --all-namespaces -NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE -kube-system heapster 1 1 1 1 41m -kube-system kube-dns 1 1 1 1 48m -kube-system kubernetes-dashboard 1 1 1 1 43m -kube-system monitoring-grafana 1 1 1 1 41m -kube-system monitoring-influxdb 1 1 1 1 41m -``` - -* on k8s-master1 or k8s-master2 or k8s-master3: scale up kubernetes-dashboard and kube-dns replicas to 3, make all master running kubernetes-dashboard and kube-dns - -``` -$ kubectl scale --replicas=3 -n kube-system deployment/kube-dns -$ kubectl get pods --all-namespaces -o wide| grep kube-dns - -$ kubectl scale --replicas=3 -n kube-system deployment/kubernetes-dashboard -$ kubectl get pods --all-namespaces -o wide| grep kubernetes-dashboard - -$ kubectl scale --replicas=3 -n kube-system deployment/heapster -$ kubectl get pods --all-namespaces -o wide| grep heapster - -$ kubectl scale --replicas=3 -n kube-system deployment/monitoring-grafana -$ kubectl get pods --all-namespaces -o wide| grep monitoring-grafana - -$ kubectl scale --replicas=3 -n kube-system deployment/monitoring-influxdb -$ kubectl get pods --all-namespaces -o wide| grep monitoring-influxdb -``` ---- -[category](#category) - -#### keepalived installation - -* on k8s-master1, k8s-master2, k8s-master3: install keepalived service - -``` -$ yum install -y keepalived - -$ systemctl enable keepalived && systemctl restart 
keepalived -``` - -* on k8s-master1, k8s-master2, k8s-master3: backup keepalived config file - -``` -$ mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak -``` - -* on k8s-master1, k8s-master2, k8s-master3: create apiserver monitoring script, when apiserver failed keepalived will stop and virtual IP address will transfer to the other node - -``` -$ vi /etc/keepalived/check_apiserver.sh -#!/bin/bash -err=0 -for k in $( seq 1 10 ) -do - check_code=$(ps -ef|grep kube-apiserver | wc -l) - if [ "$check_code" = "1" ]; then - err=$(expr $err + 1) - sleep 5 - continue - else - err=0 - break - fi -done -if [ "$err" != "0" ]; then - echo "systemctl stop keepalived" - /usr/bin/systemctl stop keepalived - exit 1 -else - exit 0 -fi - -chmod a+x /etc/keepalived/check_apiserver.sh -``` - -* on k8s-master1, k8s-master2, k8s-master3: check the network interface name - -``` -$ ip a | grep 192.168.60 -``` - -* on k8s-master1, k8s-master2, k8s-master3: edit keepalived settings: -* state ${STATE}: is MASTER or BACKUP, only one node can set to MASTER -* interface ${INTERFACE_NAME}: which network interfaces will virtual IP address bind on -* mcast_src_ip ${HOST_IP}: current host IP address -* priority ${PRIORITY}: for example (102 or 101 or 100) -* ${VIRTUAL_IP}: the virtual IP address, here we set to 192.168.60.80 - -``` -$ vi /etc/keepalived/keepalived.conf -! Configuration File for keepalived -global_defs { - router_id LVS_DEVEL -} -vrrp_script chk_apiserver { - script "/etc/keepalived/check_apiserver.sh" - interval 2 - weight -5 - fall 3 - rise 2 -} -vrrp_instance VI_1 { - state ${STATE} - interface ${INTERFACE_NAME} - mcast_src_ip ${HOST_IP} - virtual_router_id 51 - priority ${PRIORITY} - advert_int 2 - authentication { - auth_type PASS - auth_pass 4be37dc3b4c90194d1600c483e10ad1d - } - virtual_ipaddress { - ${VIRTUAL_IP} - } - track_script { - chk_apiserver - } -} -``` - -* on k8s-master1, k8s-master2, k8s-master3: reboot keepalived service, and check virtual IP address work or not - -``` -$ systemctl restart keepalived -$ ping 192.168.60.80 -``` - ---- -[category](#category) - -#### nginx load balancer configuration - -* on k8s-master1, k8s-master2, k8s-master3: edit nginx-default.conf settings, replace ${HOST_IP} with k8s-master1, k8s-master2, k8s-master3's IP address. 
- -``` -$ vi /root/kubeadm-ha/nginx-default.conf -stream { - upstream apiserver { - server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s; - server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s; - server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s; - } - - server { - listen 8443; - proxy_connect_timeout 1s; - proxy_timeout 3s; - proxy_pass apiserver; - } -} -``` - -* on k8s-master1, k8s-master2, k8s-master3: use docker to start up nginx - -``` -$ docker run -d -p 8443:8443 \ ---name nginx-lb \ ---restart always \ --v /root/kubeadm-ha/nginx-default.conf:/etc/nginx/nginx.conf \ -nginx -``` - -* on k8s-master1, k8s-master2, k8s-master3: check keepalived and nginx - -``` -$ curl -L 192.168.60.80:8443 | wc -l - % Total % Received % Xferd Average Speed Time Time Time Current - Dload Upload Total Spent Left Speed -100 14 0 14 0 0 18324 0 --:--:-- --:--:-- --:--:-- 14000 -1 -``` - -* on k8s-master1, k8s-master2, k8s-master3: check keeplived logs, if it show logs below it means that virtual IP address bind on this host - -``` -$ systemctl status keepalived -l -VRRP_Instance(VI_1) Sending gratuitous ARPs on ens160 for 192.168.60.80 -``` - ---- -[category](#category) - -#### kube-proxy configuration - -* on k8s-master1: edit kube-proxy settings to use keepalived virtual IP address - -``` -$ kubectl get -n kube-system configmap -NAME DATA AGE -extension-apiserver-authentication 6 4h -kube-flannel-cfg 2 4h -kube-proxy 1 4h -``` - -* on k8s-master1: edit configmap/kube-proxy settings, replaces the IP address to keepalived's virtual IP address - -``` -$ kubectl edit -n kube-system configmap/kube-proxy - server: https://192.168.60.80:8443 -``` - -* on k8s-master1: check configmap/kube-proxy settings - -``` -$ kubectl get -n kube-system configmap/kube-proxy -o yaml -``` - -* on k8s-master1: delete all kube-proxy pods, kube-proxy pods will re-create automatically - -``` -kubectl get pods --all-namespaces -o wide | grep proxy -``` - -* on k8s-master1, k8s-master2, k8s-master3: restart docker kubelet keepalived services - -``` -$ systemctl restart docker kubelet keepalived -``` - ---- -[category](#category) - -#### verfify master high avialiability with keepalived - -* on k8s-master1: check each master nodes pods status - -``` -$ kubectl get pods --all-namespaces -o wide | grep k8s-master1 - -$ kubectl get pods --all-namespaces -o wide | grep k8s-master2 - -$ kubectl get pods --all-namespaces -o wide | grep k8s-master3 -``` - ---- -[category](#category) - -### k8s-nodes join the kubernetes cluster - -#### use kubeadm to join the cluster -* on k8s-master1: make master nodes scheduling pods disabled - -``` -$ kubectl patch node k8s-master1 -p '{"spec":{"unschedulable":true}}' - -$ kubectl patch node k8s-master2 -p '{"spec":{"unschedulable":true}}' - -$ kubectl patch node k8s-master3 -p '{"spec":{"unschedulable":true}}' -``` - -* on k8s-master1: list kubeadm token - -``` -$ kubeadm token list -TOKEN TTL EXPIRES USAGES DESCRIPTION -xxxxxx.yyyyyy authentication,signing The default bootstrap token generated by 'kubeadm init' -``` - -* on k8s-node1 ~ k8s-node8: use kubeadm to join the kubernetes cluster, replace ${TOKEN} with token show ahead, replace ${VIRTUAL_IP} with keepalived's virtual IP address (192.168.60.80) - -``` -$ kubeadm join --token ${TOKEN} ${VIRTUAL_IP}:8443 -``` - ---- -[category](#category) - -#### deploy nginx application to verify installation - -* on k8s-node1 ~ k8s-node8: check kubelet status - -``` -$ systemctl status kubelet -● kubelet.service - kubelet: The 
Kubernetes Node Agent - Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled) - Drop-In: /etc/systemd/system/kubelet.service.d - └─10-kubeadm.conf - Active: active (running) since Tue 2017-06-27 16:23:43 CST; 1 day 18h ago - Docs: http://kubernetes.io/docs/ - Main PID: 1146 (kubelet) - Memory: 204.9M - CGroup: /system.slice/kubelet.service - ├─ 1146 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require... - ├─ 2553 journalctl -k -f - ├─ 4988 /usr/sbin/glusterfs --log-level=ERROR --log-file=/var/lib/kubelet/pl... - └─14720 /usr/sbin/glusterfs --log-level=ERROR --log-file=/var/lib/kubelet/pl... -``` - -* on k8s-master1: list nodes status - -``` -$ kubectl get nodes -o wide -NAME STATUS AGE VERSION -k8s-master1 Ready,SchedulingDisabled 5h v1.6.4 -k8s-master2 Ready,SchedulingDisabled 4h v1.6.4 -k8s-master3 Ready,SchedulingDisabled 4h v1.6.4 -k8s-node1 Ready 6m v1.6.4 -k8s-node2 Ready 4m v1.6.4 -k8s-node3 Ready 4m v1.6.4 -k8s-node4 Ready 3m v1.6.4 -k8s-node5 Ready 3m v1.6.4 -k8s-node6 Ready 3m v1.6.4 -k8s-node7 Ready 3m v1.6.4 -k8s-node8 Ready 3m v1.6.4 -``` - -* on k8s-master1: deploy nginx service on kubernetes, it show that nginx service deploy on k8s-node5 - -``` -$ kubectl run nginx --image=nginx --port=80 -deployment "nginx" created - -$ kubectl get pod -o wide -l=run=nginx -NAME READY STATUS RESTARTS AGE IP NODE -nginx-2662403697-pbmwt 1/1 Running 0 5m 10.244.7.6 k8s-node5 -``` - -* on k8s-master1: expose nginx services port - -``` -$ kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort -service "nginx" exposed - -$ kubectl get svc -l=run=nginx -NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE -nginx 10.105.151.69 80:31639/TCP 43s - -$ curl k8s-master2:31639 - - - -Welcome to nginx! - - - -
Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

- - - -``` - -* congratulation! kubernetes high availiability cluster deploy successfully 😀 ---- -[category](#category) - diff --git a/v1.6/README_CN.md b/v1.6/README_CN.md deleted file mode 100644 index 8ddcad4..0000000 --- a/v1.6/README_CN.md +++ /dev/null @@ -1,1237 +0,0 @@ -# kubeadm-highavailiability - 基于kubeadm的kubernetes高可用集群部署,支持v1.11.x v1.9.x v1.7.x v1.6.x版本 - -![k8s logo](../images/Kubernetes.png) - -- [中文文档(for v1.11.x版本)](../README_CN.md) -- [English document(for v1.11.x version)](../README.md) -- [中文文档(for v1.9.x版本)](../v1.9/README_CN.md) -- [English document(for v1.9.x version)](../v1.9/README.md) -- [中文文档(for v1.7.x版本)](../v1.7/README_CN.md) -- [English document(for v1.7.x version)](../v1.7/README.md) -- [中文文档(for v1.6.x版本)](../v1.6/README_CN.md) -- [English document(for v1.6.x version)](../v1.6/README.md) - ---- - -- [GitHub项目地址](https://github.com/cookeem/kubeadm-ha/) -- [OSChina项目地址](https://git.oschina.net/cookeem/kubeadm-ha/) - ---- - -- 该指引适用于v1.6.x版本的kubernetes集群 - -### 目录 - -1. [部署架构](#部署架构) - 1. [概要部署架构](#概要部署架构) - 1. [详细部署架构](#详细部署架构) - 1. [主机节点清单](#主机节点清单) -1. [安装前准备](#安装前准备) - 1. [版本信息](#版本信息) - 1. [所需docker镜像](#所需docker镜像) - 1. [系统设置](#系统设置) -1. [kubernetes安装](#kubernetes安装) - 1. [kubernetes相关服务安装](#kubernetes相关服务安装) - 1. [docker镜像导入](#docker镜像导入) -1. [第一台master初始化](#第一台master初始化) - 1. [独立etcd集群部署](#独立etcd集群部署) - 1. [kubeadm初始化](#kubeadm初始化) - 1. [flannel网络组件安装](#flannel网络组件安装) - 1. [dashboard组件安装](#dashboard组件安装) - 1. [heapster组件安装](#heapster组件安装) -1. [master集群高可用设置](#master集群高可用设置) - 1. [复制配置](#复制配置) - 1. [创建证书](#创建证书) - 1. [修改配置](#修改配置) - 1. [验证高可用安装](#验证高可用安装) - 1. [keepalived安装配置](#keepalived安装配置) - 1. [nginx负载均衡配置](#nginx负载均衡配置) - 1. [kube-proxy配置](#kube-proxy配置) - 1. [验证master集群高可用](#验证master集群高可用) -1. [node节点加入高可用集群设置](#node节点加入高可用集群设置) - 1. [kubeadm加入高可用集群](#kubeadm加入高可用集群) - 1. 
[部署应用验证集群](#部署应用验证集群) - - -### 部署架构 - -#### 概要部署架构 - -![ha logo](../images/ha.png) - -* kubernetes高可用的核心架构是master的高可用,kubectl、客户端以及nodes访问load balancer实现高可用。 - ---- -[返回目录](#目录) - -#### 详细部署架构 - -![k8s ha](../images/k8s-ha.png) - -* kubernetes组件说明 - -> kube-apiserver:集群核心,集群API接口、集群各个组件通信的中枢;集群安全控制; - -> etcd:集群的数据中心,用于存放集群的配置以及状态信息,非常重要,如果数据丢失那么集群将无法恢复;因此高可用集群部署首先就是etcd是高可用集群; - -> kube-scheduler:集群Pod的调度中心;默认kubeadm安装情况下--leader-elect参数已经设置为true,保证master集群中只有一个kube-scheduler处于活跃状态; - -> kube-controller-manager:集群状态管理器,当集群状态与期望不同时,kcm会努力让集群恢复期望状态,比如:当一个pod死掉,kcm会努力新建一个pod来恢复对应replicas set期望的状态;默认kubeadm安装情况下--leader-elect参数已经设置为true,保证master集群中只有一个kube-controller-manager处于活跃状态; - -> kubelet: kubernetes node agent,负责与node上的docker engine打交道; - -> kube-proxy: 每个node上一个,负责service vip到endpoint pod的流量转发,当前主要通过设置iptables规则实现。 - -* 负载均衡 - -> keepalived集群设置一个虚拟ip地址,虚拟ip地址指向k8s-master1、k8s-master2、k8s-master3。 - -> nginx用于k8s-master1、k8s-master2、k8s-master3的apiserver的负载均衡。外部kubectl以及nodes访问apiserver的时候就可以用过keepalived的虚拟ip(192.168.60.80)以及nginx端口(8443)访问master集群的apiserver。 - ---- -[返回目录](#目录) - -#### 主机节点清单 - - 主机名 | IP地址 | 说明 | 组件 - :--- | :--- | :--- | :--- - k8s-master1 | 192.168.60.71 | master节点1 | keepalived、nginx、etcd、kubelet、kube-apiserver、kube-scheduler、kube-proxy、kube-dashboard、heapster - k8s-master2 | 192.168.60.72 | master节点2 | keepalived、nginx、etcd、kubelet、kube-apiserver、kube-scheduler、kube-proxy、kube-dashboard、heapster - k8s-master3 | 192.168.60.73 | master节点3 | keepalived、nginx、etcd、kubelet、kube-apiserver、kube-scheduler、kube-proxy、kube-dashboard、heapster - 无 | 192.168.60.80 | keepalived虚拟IP | 无 - k8s-node1 ~ 8 | 192.168.60.81 ~ 88 | 8个node节点 | kubelet、kube-proxy - ---- -[返回目录](#目录) - -### 安装前准备 - -#### 版本信息 - -* Linux版本:CentOS 7.3.1611 - -``` -cat /etc/redhat-release -CentOS Linux release 7.3.1611 (Core) -``` - -* docker版本:1.12.6 - -``` -$ docker version -Client: - Version: 1.12.6 - API version: 1.24 - Go version: go1.6.4 - Git commit: 78d1802 - Built: Tue Jan 10 20:20:01 2017 - OS/Arch: linux/amd64 - -Server: - Version: 1.12.6 - API version: 1.24 - Go version: go1.6.4 - Git commit: 78d1802 - Built: Tue Jan 10 20:20:01 2017 - OS/Arch: linux/amd64 -``` - -* kubeadm版本:v1.6.4 - -``` -$ kubeadm version -kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"} -``` - -* kubelet版本:v1.6.4 - -``` -$ kubelet --version -Kubernetes v1.6.4 -``` - ---- - -[返回目录](#目录) - -#### 所需docker镜像 - -* 国内可以使用daocloud加速器下载相关镜像,然后通过docker save、docker load把本地下载的镜像放到kubernetes集群的所在机器上,daocloud加速器链接如下: - -[https://www.daocloud.io/mirror#accelerator-doc](https://www.daocloud.io/mirror#accelerator-doc) - -* 在本机MacOSX上pull相关docker镜像 - -``` -$ docker pull gcr.io/google_containers/kube-apiserver-amd64:v1.6.4 -$ docker pull gcr.io/google_containers/kube-proxy-amd64:v1.6.4 -$ docker pull gcr.io/google_containers/kube-controller-manager-amd64:v1.6.4 -$ docker pull gcr.io/google_containers/kube-scheduler-amd64:v1.6.4 -$ docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1 -$ docker pull quay.io/coreos/flannel:v0.7.1-amd64 -$ docker pull gcr.io/google_containers/heapster-amd64:v1.3.0 -$ docker pull gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1 -$ docker pull gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1 -$ docker pull gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1 -$ docker pull 
gcr.io/google_containers/etcd-amd64:3.0.17 -$ docker pull gcr.io/google_containers/heapster-grafana-amd64:v4.0.2 -$ docker pull gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1 -$ docker pull nginx:latest -$ docker pull gcr.io/google_containers/pause-amd64:3.0 -``` - -* 在本机MacOSX上获取代码,并进入代码目录 - -``` -$ git clone https://github.com/cookeem/kubeadm-ha -$ cd kubeadm-ha -``` - -* 在本机MacOSX上把相关docker镜像保存成文件 - -``` -$ mkdir -p docker-images -$ docker save -o docker-images/kube-apiserver-amd64 gcr.io/google_containers/kube-apiserver-amd64:v1.6.4 -$ docker save -o docker-images/kube-proxy-amd64 gcr.io/google_containers/kube-proxy-amd64:v1.6.4 -$ docker save -o docker-images/kube-controller-manager-amd64 gcr.io/google_containers/kube-controller-manager-amd64:v1.6.4 -$ docker save -o docker-images/kube-scheduler-amd64 gcr.io/google_containers/kube-scheduler-amd64:v1.6.4 -$ docker save -o docker-images/kubernetes-dashboard-amd64 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1 -$ docker save -o docker-images/flannel quay.io/coreos/flannel:v0.7.1-amd64 -$ docker save -o docker-images/heapster-amd64 gcr.io/google_containers/heapster-amd64:v1.3.0 -$ docker save -o docker-images/k8s-dns-sidecar-amd64 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1 -$ docker save -o docker-images/k8s-dns-kube-dns-amd64 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1 -$ docker save -o docker-images/k8s-dns-dnsmasq-nanny-amd64 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1 -$ docker save -o docker-images/etcd-amd64 gcr.io/google_containers/etcd-amd64:3.0.17 -$ docker save -o docker-images/heapster-grafana-amd64 gcr.io/google_containers/heapster-grafana-amd64:v4.0.2 -$ docker save -o docker-images/heapster-influxdb-amd64 gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1 -$ docker save -o docker-images/pause-amd64 gcr.io/google_containers/pause-amd64:3.0 -$ docker save -o docker-images/nginx nginx:latest -``` - -* 在本机MacOSX上把代码以及docker镜像复制到所有节点上 - -``` -$ scp -r * root@k8s-master1:/root/kubeadm-ha -$ scp -r * root@k8s-master2:/root/kubeadm-ha -$ scp -r * root@k8s-master3:/root/kubeadm-ha -$ scp -r * root@k8s-node1:/root/kubeadm-ha -$ scp -r * root@k8s-node2:/root/kubeadm-ha -$ scp -r * root@k8s-node3:/root/kubeadm-ha -$ scp -r * root@k8s-node4:/root/kubeadm-ha -$ scp -r * root@k8s-node5:/root/kubeadm-ha -$ scp -r * root@k8s-node6:/root/kubeadm-ha -$ scp -r * root@k8s-node7:/root/kubeadm-ha -$ scp -r * root@k8s-node8:/root/kubeadm-ha -``` - ---- -[返回目录](#目录) - -#### 系统设置 - -* 以下在kubernetes所有节点上都是使用root用户进行操作 - -* 在kubernetes所有节点上增加kubernetes仓库 - -``` -$ cat < /etc/yum.repos.d/kubernetes.repo -[kubernetes] -name=Kubernetes -baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 -enabled=1 -gpgcheck=1 -repo_gpgcheck=1 -gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg - https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg -EOF -``` - -* 在kubernetes所有节点上进行系统更新 - -``` -$ yum update -y -``` - -* 在kubernetes所有节点上关闭防火墙 - -``` -$ systemctl disable firewalld && systemctl stop firewalld && systemctl status firewalld -``` - -* 在kubernetes所有节点上设置SELINUX为permissive模式 - -``` -$ vi /etc/selinux/config -SELINUX=permissive -``` - -* 在kubernetes所有节点上设置iptables参数,否则kubeadm init会提示错误 - -``` -$ vi /etc/sysctl.d/k8s.conf -net.bridge.bridge-nf-call-iptables = 1 -net.bridge.bridge-nf-call-ip6tables = 1 -``` - -* 在kubernetes所有节点上重启主机 - -``` -$ reboot -``` - ---- -[返回目录](#目录) - -### kubernetes安装 - -#### kubernetes相关服务安装 - -* 
在kubernetes所有节点上验证SELINUX模式,必须保证SELINUX为permissive模式,否则kubernetes启动会出现各种异常 - -``` -$ getenforce -Permissive -``` - -* 在kubernetes所有节点上安装并启动kubernetes - -``` -$ yum search docker --showduplicates -$ yum install docker-1.12.6-16.el7.centos.x86_64 - -$ yum search kubelet --showduplicates -$ yum install kubelet-1.6.4-0.x86_64 - -$ yum search kubeadm --showduplicates -$ yum install kubeadm-1.6.4-0.x86_64 - -$ yum search kubernetes-cni --showduplicates -$ yum install kubernetes-cni-0.5.1-0.x86_64 - -$ systemctl enable docker && systemctl start docker -$ systemctl enable kubelet && systemctl start kubelet -``` - ---- -[返回目录](#目录) - -#### docker镜像导入 - -* 在kubernetes所有节点上导入docker镜像 - -``` -$ docker load -i /root/kubeadm-ha/docker-images/etcd-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/flannel -$ docker load -i /root/kubeadm-ha/docker-images/heapster-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/heapster-grafana-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/heapster-influxdb-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/k8s-dns-dnsmasq-nanny-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/k8s-dns-kube-dns-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/k8s-dns-sidecar-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/kube-apiserver-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/kube-controller-manager-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/kube-proxy-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/kubernetes-dashboard-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/kube-scheduler-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/pause-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/nginx - -$ docker images -REPOSITORY TAG IMAGE ID CREATED SIZE -gcr.io/google_containers/kube-apiserver-amd64 v1.6.4 4e3810a19a64 5 weeks ago 150.6 MB -gcr.io/google_containers/kube-proxy-amd64 v1.6.4 e073a55c288b 5 weeks ago 109.2 MB -gcr.io/google_containers/kube-controller-manager-amd64 v1.6.4 0ea16a85ac34 5 weeks ago 132.8 MB -gcr.io/google_containers/kube-scheduler-amd64 v1.6.4 1fab9be555e1 5 weeks ago 76.75 MB -gcr.io/google_containers/kubernetes-dashboard-amd64 v1.6.1 71dfe833ce74 6 weeks ago 134.4 MB -quay.io/coreos/flannel v0.7.1-amd64 cd4ae0be5e1b 10 weeks ago 77.76 MB -gcr.io/google_containers/heapster-amd64 v1.3.0 f9d33bedfed3 3 months ago 68.11 MB -gcr.io/google_containers/k8s-dns-sidecar-amd64 1.14.1 fc5e302d8309 4 months ago 44.52 MB -gcr.io/google_containers/k8s-dns-kube-dns-amd64 1.14.1 f8363dbf447b 4 months ago 52.36 MB -gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 1.14.1 1091847716ec 4 months ago 44.84 MB -gcr.io/google_containers/etcd-amd64 3.0.17 243830dae7dd 4 months ago 168.9 MB -gcr.io/google_containers/heapster-grafana-amd64 v4.0.2 a1956d2a1a16 5 months ago 131.5 MB -gcr.io/google_containers/heapster-influxdb-amd64 v1.1.1 d3fccbedd180 5 months ago 11.59 MB -nginx latest 01f818af747d 6 months ago 181.6 MB -gcr.io/google_containers/pause-amd64 3.0 99e59f495ffa 14 months ago 746.9 kB -``` - ---- -[返回目录](#目录) - -### 第一台master初始化 - -#### 独立etcd集群部署 - -* 在k8s-master1节点上以docker方式启动etcd集群 - -``` -$ docker stop etcd && docker rm etcd -$ rm -rf /var/lib/etcd-cluster -$ mkdir -p /var/lib/etcd-cluster -$ docker run -d \ ---restart always \ --v /etc/ssl/certs:/etc/ssl/certs \ --v /var/lib/etcd-cluster:/var/lib/etcd \ --p 4001:4001 \ --p 2380:2380 \ --p 2379:2379 \ ---name etcd \ -gcr.io/google_containers/etcd-amd64:3.0.17 \ -etcd --name=etcd0 \ 
---advertise-client-urls=http://192.168.60.71:2379,http://192.168.60.71:4001 \ ---listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \ ---initial-advertise-peer-urls=http://192.168.60.71:2380 \ ---listen-peer-urls=http://0.0.0.0:2380 \ ---initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \ ---initial-cluster=etcd0=http://192.168.60.71:2380,etcd1=http://192.168.60.72:2380,etcd2=http://192.168.60.73:2380 \ ---initial-cluster-state=new \ ---auto-tls \ ---peer-auto-tls \ ---data-dir=/var/lib/etcd -``` - -* 在k8s-master2节点上以docker方式启动etcd集群 - -``` -$ docker stop etcd && docker rm etcd -$ rm -rf /var/lib/etcd-cluster -$ mkdir -p /var/lib/etcd-cluster -$ docker run -d \ ---restart always \ --v /etc/ssl/certs:/etc/ssl/certs \ --v /var/lib/etcd-cluster:/var/lib/etcd \ --p 4001:4001 \ --p 2380:2380 \ --p 2379:2379 \ ---name etcd \ -gcr.io/google_containers/etcd-amd64:3.0.17 \ -etcd --name=etcd1 \ ---advertise-client-urls=http://192.168.60.72:2379,http://192.168.60.72:4001 \ ---listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \ ---initial-advertise-peer-urls=http://192.168.60.72:2380 \ ---listen-peer-urls=http://0.0.0.0:2380 \ ---initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \ ---initial-cluster=etcd0=http://192.168.60.71:2380,etcd1=http://192.168.60.72:2380,etcd2=http://192.168.60.73:2380 \ ---initial-cluster-state=new \ ---auto-tls \ ---peer-auto-tls \ ---data-dir=/var/lib/etcd -``` - -* 在k8s-master3节点上以docker方式启动etcd集群 - -``` -$ docker stop etcd && docker rm etcd -$ rm -rf /var/lib/etcd-cluster -$ mkdir -p /var/lib/etcd-cluster -$ docker run -d \ ---restart always \ --v /etc/ssl/certs:/etc/ssl/certs \ --v /var/lib/etcd-cluster:/var/lib/etcd \ --p 4001:4001 \ --p 2380:2380 \ --p 2379:2379 \ ---name etcd \ -gcr.io/google_containers/etcd-amd64:3.0.17 \ -etcd --name=etcd2 \ ---advertise-client-urls=http://192.168.60.73:2379,http://192.168.60.73:4001 \ ---listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \ ---initial-advertise-peer-urls=http://192.168.60.73:2380 \ ---listen-peer-urls=http://0.0.0.0:2380 \ ---initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \ ---initial-cluster=etcd0=http://192.168.60.71:2380,etcd1=http://192.168.60.72:2380,etcd2=http://192.168.60.73:2380 \ ---initial-cluster-state=new \ ---auto-tls \ ---peer-auto-tls \ ---data-dir=/var/lib/etcd -``` - -* 在k8s-master1、k8s-master2、k8s-master3上检查etcd启动状态 - -``` -$ docker exec -ti etcd ash - -$ etcdctl member list -1a32c2d3f1abcad0: name=etcd2 peerURLs=http://192.168.60.73:2380 clientURLs=http://192.168.60.73:2379,http://192.168.60.73:4001 isLeader=false -1da4f4e8b839cb79: name=etcd1 peerURLs=http://192.168.60.72:2380 clientURLs=http://192.168.60.72:2379,http://192.168.60.72:4001 isLeader=false -4238bcb92d7f2617: name=etcd0 peerURLs=http://192.168.60.71:2380 clientURLs=http://192.168.60.71:2379,http://192.168.60.71:4001 isLeader=true - -$ etcdctl cluster-health -member 1a32c2d3f1abcad0 is healthy: got healthy result from http://192.168.60.73:2379 -member 1da4f4e8b839cb79 is healthy: got healthy result from http://192.168.60.72:2379 -member 4238bcb92d7f2617 is healthy: got healthy result from http://192.168.60.71:2379 -cluster is healthy - -$ exit -``` - ---- -[返回目录](#目录) - -#### kubeadm初始化 - -* 在k8s-master1上修改kubeadm-init-v1.6.x.yaml文件,设置etcd.endpoints的${HOST_IP}为k8s-master1、k8s-master2、k8s-master3的IP地址 - -``` -$ vi /root/kubeadm-ha/kubeadm-init-v1.6.x.yaml -apiVersion: kubeadm.k8s.io/v1alpha1 -kind: MasterConfiguration -kubernetesVersion: v1.6.4 -networking: - podSubnet: 10.244.0.0/16 -etcd: - 
endpoints: - - http://192.168.60.71:2379 - - http://192.168.60.72:2379 - - http://192.168.60.73:2379 -``` - -* 如果使用kubeadm初始化集群,启动过程可能会卡在以下位置,那么可能是因为cgroup-driver参数与docker的不一致引起 -* [apiclient] Created API client, waiting for the control plane to become ready -* journalctl -t kubelet -S '2017-06-08'查看日志,发现如下错误 -* error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" -* 需要修改KUBELET_CGROUP_ARGS=--cgroup-driver=systemd为KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs - -``` -$ vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -#Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd" -Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs" - -$ systemctl daemon-reload && systemctl restart kubelet -``` - -* 在k8s-master1上使用kubeadm初始化kubernetes集群,连接外部etcd集群 - -``` -$ kubeadm init --config=/root/kubeadm-ha/kubeadm-init-v1.6.x.yaml -``` - -* 在k8s-master1上设置kubectl的环境变量KUBECONFIG,连接kubelet - -``` -$ vi ~/.bashrc -export KUBECONFIG=/etc/kubernetes/admin.conf - -$ source ~/.bashrc -``` - ---- -[返回目录](#目录) - -#### flannel网络组件安装 - -* 在k8s-master1上安装flannel pod网络组件,必须安装网络组件,否则kube-dns pod会一直处于ContainerCreating - -``` -$ kubectl create -f /root/kubeadm-ha/kube-flannel -clusterrole "flannel" created -clusterrolebinding "flannel" created -serviceaccount "flannel" created -configmap "kube-flannel-cfg" created -daemonset "kube-flannel-ds" created -``` - -* 在k8s-master1上验证kube-dns成功启动,大概等待3分钟,验证所有pods的状态为Running - -``` -$ kubectl get pods --all-namespaces -o wide -NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE -kube-system kube-apiserver-k8s-master1 1/1 Running 0 3m 192.168.60.71 k8s-master1 -kube-system kube-controller-manager-k8s-master1 1/1 Running 0 3m 192.168.60.71 k8s-master1 -kube-system kube-dns-3913472980-k9mt6 3/3 Running 0 4m 10.244.0.104 k8s-master1 -kube-system kube-flannel-ds-3hhjd 2/2 Running 0 1m 192.168.60.71 k8s-master1 -kube-system kube-proxy-rzq3t 1/1 Running 0 4m 192.168.60.71 k8s-master1 -kube-system kube-scheduler-k8s-master1 1/1 Running 0 3m 192.168.60.71 k8s-master1 -``` - ---- -[返回目录](#目录) - -#### dashboard组件安装 - -* 在k8s-master1上安装dashboard组件 - -``` -$ kubectl create -f /root/kubeadm-ha/kube-dashboard/ -serviceaccount "kubernetes-dashboard" created -clusterrolebinding "kubernetes-dashboard" created -deployment "kubernetes-dashboard" created -service "kubernetes-dashboard" created -``` - -* 在k8s-master1上启动proxy,映射地址到0.0.0.0 - -``` -$ kubectl proxy --address='0.0.0.0' & -``` - -* 在本机MacOSX上访问dashboard地址,验证dashboard成功启动 - -``` -http://k8s-master1:30000 -``` - -![dashboard](images/dashboard.png) - ---- -[返回目录](#目录) - -#### heapster组件安装 - -* 在k8s-master1上允许在master上部署pod,否则heapster会无法部署 - -``` -$ kubectl taint nodes --all node-role.kubernetes.io/master- -node "k8s-master1" tainted -``` - -* 在k8s-master1上安装heapster组件,监控性能 - -``` -$ kubectl create -f /root/kubeadm-ha/kube-heapster -``` - -* 在k8s-master1上重启docker以及kubelet服务,让heapster在dashboard上生效显示 - -``` -$ systemctl restart docker kubelet -``` - -* 在k8s-master上检查pods状态 - -``` -$ kubectl get all --all-namespaces -o wide -NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE -kube-system heapster-783524908-kn6jd 1/1 Running 1 9m 10.244.0.111 k8s-master1 -kube-system kube-apiserver-k8s-master1 1/1 Running 1 15m 192.168.60.71 k8s-master1 -kube-system kube-controller-manager-k8s-master1 1/1 Running 1 15m 192.168.60.71 k8s-master1 -kube-system kube-dns-3913472980-k9mt6 3/3 Running 3 16m 10.244.0.110 k8s-master1 -kube-system kube-flannel-ds-3hhjd 2/2 Running 3 13m 192.168.60.71 k8s-master1 
-kube-system kube-proxy-rzq3t 1/1 Running 1 16m 192.168.60.71 k8s-master1 -kube-system kube-scheduler-k8s-master1 1/1 Running 1 15m 192.168.60.71 k8s-master1 -kube-system kubernetes-dashboard-2039414953-d46vw 1/1 Running 1 11m 10.244.0.109 k8s-master1 -kube-system monitoring-grafana-3975459543-8l94z 1/1 Running 1 9m 10.244.0.112 k8s-master1 -kube-system monitoring-influxdb-3480804314-72ltf 1/1 Running 1 9m 10.244.0.113 k8s-master1 -``` - -* 在本机MacOSX上访问dashboard地址,验证heapster成功启动,查看Pods的CPU以及Memory信息是否正常呈现 - -``` -http://k8s-master1:30000 -``` - -![heapster](images/heapster.png) - -* 至此,第一台master成功安装,并已经完成flannel、dashboard、heapster的部署 - ---- -[返回目录](#目录) - -### master集群高可用设置 - -#### 复制配置 - -* 在k8s-master1上把/etc/kubernetes/复制到k8s-master2、k8s-master3 - -``` -scp -r /etc/kubernetes/ k8s-master2:/etc/ -scp -r /etc/kubernetes/ k8s-master3:/etc/ -``` - -* 在k8s-master2、k8s-master3上重启kubelet服务,并检查kubelet服务状态为active (running) - -``` -$ systemctl daemon-reload && systemctl restart kubelet - -$ systemctl status kubelet -● kubelet.service - kubelet: The Kubernetes Node Agent - Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled) - Drop-In: /etc/systemd/system/kubelet.service.d - └─10-kubeadm.conf - Active: active (running) since Tue 2017-06-27 16:24:22 CST; 1 day 17h ago - Docs: http://kubernetes.io/docs/ - Main PID: 2780 (kubelet) - Memory: 92.9M - CGroup: /system.slice/kubelet.service - ├─2780 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-... - └─2811 journalctl -k -f -``` - -* 在k8s-master2、k8s-master3上设置kubectl的环境变量KUBECONFIG,连接kubelet - -``` -$ vi ~/.bashrc -export KUBECONFIG=/etc/kubernetes/admin.conf - -$ source ~/.bashrc -``` - -* 在k8s-master2、k8s-master3检测节点状态,发现节点已经加进来 - -``` -$ kubectl get nodes -o wide -NAME STATUS AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION -k8s-master1 Ready 26m v1.6.4 CentOS Linux 7 (Core) 3.10.0-514.6.1.el7.x86_64 -k8s-master2 Ready 2m v1.6.4 CentOS Linux 7 (Core) 3.10.0-514.21.1.el7.x86_64 -k8s-master3 Ready 2m v1.6.4 CentOS Linux 7 (Core) 3.10.0-514.21.1.el7.x86_64 -``` - -* 在k8s-master2、k8s-master3上修改kube-apiserver.yaml的配置,${HOST_IP}改为本机IP - -``` -$ vi /etc/kubernetes/manifests/kube-apiserver.yaml - - --advertise-address=${HOST_IP} -``` - -* 在k8s-master2和k8s-master3上的修改kubelet.conf设置,${HOST_IP}改为本机IP - -``` -$ vi /etc/kubernetes/kubelet.conf -server: https://${HOST_IP}:6443 -``` - -* 在k8s-master2和k8s-master3上的重启服务 - -``` -$ systemctl daemon-reload && systemctl restart docker kubelet -``` - ---- -[返回目录](#目录) - -#### 创建证书 - -* 在k8s-master2和k8s-master3上修改kubelet.conf后,由于kubelet.conf配置的crt和key与本机IP地址不一致的情况,kubelet服务会异常退出,crt和key必须重新制作。查看apiserver.crt的签名信息,发现IP Address以及DNS绑定了k8s-master1,必须进行相应修改。 - -``` -openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt -Certificate: - Data: - Version: 3 (0x2) - Serial Number: 9486057293403496063 (0x83a53ed95c519e7f) - Signature Algorithm: sha1WithRSAEncryption - Issuer: CN=kubernetes - Validity - Not Before: Jun 22 16:22:44 2017 GMT - Not After : Jun 22 16:22:44 2018 GMT - Subject: CN=kube-apiserver, - Subject Public Key Info: - Public Key Algorithm: rsaEncryption - Public-Key: (2048 bit) - Modulus: - d0:10:4a:3b:c4:62:5d:ae:f8:f1:16:48:b3:77:6b: - 53:4b - Exponent: 65537 (0x10001) - X509v3 extensions: - X509v3 Subject Alternative Name: - DNS:k8s-master1, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:192.168.60.71 - Signature Algorithm: sha1WithRSAEncryption - 
dd:68:16:f9:11:be:c3:3c:be:89:9f:14:60:6b:e0:47:c7:91: - 9e:78:ab:ce -``` - -* 在k8s-master1、k8s-master2、k8s-master3上使用ca.key和ca.crt制作apiserver.crt和apiserver.key - -``` -$ mkdir -p /etc/kubernetes/pki-local - -$ cd /etc/kubernetes/pki-local -``` - -* 在k8s-master1、k8s-master2、k8s-master3上生成2048位的密钥对 - -``` -$ openssl genrsa -out apiserver.key 2048 -``` - -* 在k8s-master1、k8s-master2、k8s-master3上生成证书签署请求文件 - -``` -$ openssl req -new -key apiserver.key -subj "/CN=kube-apiserver," -out apiserver.csr -``` - -* 在k8s-master1、k8s-master2、k8s-master3上编辑apiserver.ext文件,${HOST_NAME}修改为本机主机名,${HOST_IP}修改为本机IP地址,${VIRTUAL_IP}修改为keepalived的虚拟IP(192.168.60.80) - -``` -$ vi apiserver.ext -subjectAltName = DNS:${HOST_NAME},DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP:10.96.0.1, IP:${HOST_IP}, IP:${VIRTUAL_IP} -``` - -* 在k8s-master1、k8s-master2、k8s-master3上使用ca.key和ca.crt签署上述请求 - -``` -$ openssl x509 -req -in apiserver.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out apiserver.crt -days 365 -extfile /etc/kubernetes/pki-local/apiserver.ext -``` - -* 在k8s-master1、k8s-master2、k8s-master3上查看新生成的证书: - -``` -$ openssl x509 -noout -text -in apiserver.crt -Certificate: - Data: - Version: 3 (0x2) - Serial Number: 9486057293403496063 (0x83a53ed95c519e7f) - Signature Algorithm: sha1WithRSAEncryption - Issuer: CN=kubernetes - Validity - Not Before: Jun 22 16:22:44 2017 GMT - Not After : Jun 22 16:22:44 2018 GMT - Subject: CN=kube-apiserver, - Subject Public Key Info: - Public Key Algorithm: rsaEncryption - Public-Key: (2048 bit) - Modulus: - d0:10:4a:3b:c4:62:5d:ae:f8:f1:16:48:b3:77:6b: - 53:4b - Exponent: 65537 (0x10001) - X509v3 extensions: - X509v3 Subject Alternative Name: - DNS:k8s-master3, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:192.168.60.73, IP Address:192.168.60.80 - Signature Algorithm: sha1WithRSAEncryption - dd:68:16:f9:11:be:c3:3c:be:89:9f:14:60:6b:e0:47:c7:91: - 9e:78:ab:ce -``` - -* 在k8s-master1、k8s-master2、k8s-master3上把apiserver.crt和apiserver.key文件复制到/etc/kubernetes/pki目录 - -``` -$ cp apiserver.crt apiserver.key /etc/kubernetes/pki/ -``` - ---- -[返回目录](#目录) - -#### 修改配置 - -* 在k8s-master2和k8s-master3上修改admin.conf,${HOST_IP}修改为本机IP地址 - -``` -$ vi /etc/kubernetes/admin.conf - server: https://${HOST_IP}:6443 -``` - -* 在k8s-master2和k8s-master3上修改controller-manager.conf,${HOST_IP}修改为本机IP地址 - -``` -$ vi /etc/kubernetes/controller-manager.conf - server: https://${HOST_IP}:6443 -``` - -* 在k8s-master2和k8s-master3上修改scheduler.conf,${HOST_IP}修改为本机IP地址 - -``` -$ vi /etc/kubernetes/scheduler.conf - server: https://${HOST_IP}:6443 -``` - -* 在k8s-master1、k8s-master2、k8s-master3上重启所有服务 - -``` -$ systemctl daemon-reload && systemctl restart docker kubelet -``` - ---- -[返回目录](#目录) - -#### 验证高可用安装 - -* 在k8s-master1、k8s-master2、k8s-master3任意节点上检测服务启动情况,发现apiserver、controller-manager、kube-scheduler、proxy、flannel已经在k8s-master1、k8s-master2、k8s-master3成功启动 - -``` -$ kubectl get pod --all-namespaces -o wide | grep k8s-master2 -kube-system kube-apiserver-k8s-master2 1/1 Running 1 55s 192.168.60.72 k8s-master2 -kube-system kube-controller-manager-k8s-master2 1/1 Running 2 18m 192.168.60.72 k8s-master2 -kube-system kube-flannel-ds-t8gkh 2/2 Running 4 18m 192.168.60.72 k8s-master2 -kube-system kube-proxy-bpgqw 1/1 Running 1 18m 192.168.60.72 k8s-master2 -kube-system kube-scheduler-k8s-master2 1/1 Running 2 18m 192.168.60.72 
k8s-master2 - -$ kubectl get pod --all-namespaces -o wide | grep k8s-master3 -kube-system kube-apiserver-k8s-master3 1/1 Running 1 1m 192.168.60.73 k8s-master3 -kube-system kube-controller-manager-k8s-master3 1/1 Running 2 18m 192.168.60.73 k8s-master3 -kube-system kube-flannel-ds-tmqmx 2/2 Running 4 18m 192.168.60.73 k8s-master3 -kube-system kube-proxy-4stg3 1/1 Running 1 18m 192.168.60.73 k8s-master3 -kube-system kube-scheduler-k8s-master3 1/1 Running 2 18m 192.168.60.73 k8s-master3 -``` - -* 在k8s-master1、k8s-master2、k8s-master3任意节点上通过kubectl logs检查各个controller-manager和scheduler的leader election结果,可以发现只有一个节点有效表示选举正常 - -``` -$ kubectl logs -n kube-system kube-controller-manager-k8s-master1 -$ kubectl logs -n kube-system kube-controller-manager-k8s-master2 -$ kubectl logs -n kube-system kube-controller-manager-k8s-master3 - -$ kubectl logs -n kube-system kube-scheduler-k8s-master1 -$ kubectl logs -n kube-system kube-scheduler-k8s-master2 -$ kubectl logs -n kube-system kube-scheduler-k8s-master3 -``` - -* 在k8s-master1、k8s-master2、k8s-master3任意节点上查看deployment的情况 - -``` -$ kubectl get deploy --all-namespaces -NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE -kube-system heapster 1 1 1 1 41m -kube-system kube-dns 1 1 1 1 48m -kube-system kubernetes-dashboard 1 1 1 1 43m -kube-system monitoring-grafana 1 1 1 1 41m -kube-system monitoring-influxdb 1 1 1 1 41m -``` - -* 在k8s-master1、k8s-master2、k8s-master3任意节点上把kubernetes-dashboard、kube-dns、 scale up成replicas=3,保证各个master节点上都有运行 - -``` -$ kubectl scale --replicas=3 -n kube-system deployment/kube-dns -$ kubectl get pods --all-namespaces -o wide| grep kube-dns - -$ kubectl scale --replicas=3 -n kube-system deployment/kubernetes-dashboard -$ kubectl get pods --all-namespaces -o wide| grep kubernetes-dashboard - -$ kubectl scale --replicas=3 -n kube-system deployment/heapster -$ kubectl get pods --all-namespaces -o wide| grep heapster - -$ kubectl scale --replicas=3 -n kube-system deployment/monitoring-grafana -$ kubectl get pods --all-namespaces -o wide| grep monitoring-grafana - -$ kubectl scale --replicas=3 -n kube-system deployment/monitoring-influxdb -$ kubectl get pods --all-namespaces -o wide| grep monitoring-influxdb -``` ---- -[返回目录](#目录) - -#### keepalived安装配置 - -* 在k8s-master、k8s-master2、k8s-master3上安装keepalived - -``` -$ yum install -y keepalived - -$ systemctl enable keepalived && systemctl restart keepalived -``` - -* 在k8s-master1、k8s-master2、k8s-master3上备份keepalived配置文件 - -``` -$ mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak -``` - -* 在k8s-master1、k8s-master2、k8s-master3上设置apiserver监控脚本,当apiserver检测失败的时候关闭keepalived服务,转移虚拟IP地址 - -``` -$ vi /etc/keepalived/check_apiserver.sh -#!/bin/bash -err=0 -for k in $( seq 1 10 ) -do - check_code=$(ps -ef|grep kube-apiserver | wc -l) - if [ "$check_code" = "1" ]; then - err=$(expr $err + 1) - sleep 5 - continue - else - err=0 - break - fi -done -if [ "$err" != "0" ]; then - echo "systemctl stop keepalived" - /usr/bin/systemctl stop keepalived - exit 1 -else - exit 0 -fi - -chmod a+x /etc/keepalived/check_apiserver.sh -``` - -* 在k8s-master1、k8s-master2、k8s-master3上查看接口名字 - -``` -$ ip a | grep 192.168.60 -``` - -* 在k8s-master1、k8s-master2、k8s-master3上设置keepalived,参数说明如下: -* state ${STATE}:为MASTER或者BACKUP,只能有一个MASTER -* interface ${INTERFACE_NAME}:为本机的需要绑定的接口名字(通过上边的```ip a```命令查看) -* mcast_src_ip ${HOST_IP}:为本机的IP地址 -* priority ${PRIORITY}:为优先级,例如102、101、100,优先级越高越容易选择为MASTER,优先级不能一样 -* ${VIRTUAL_IP}:为虚拟的IP地址,这里设置为192.168.60.80 - -``` -$ vi 
/etc/keepalived/keepalived.conf -! Configuration File for keepalived -global_defs { - router_id LVS_DEVEL -} -vrrp_script chk_apiserver { - script "/etc/keepalived/check_apiserver.sh" - interval 2 - weight -5 - fall 3 - rise 2 -} -vrrp_instance VI_1 { - state ${STATE} - interface ${INTERFACE_NAME} - mcast_src_ip ${HOST_IP} - virtual_router_id 51 - priority ${PRIORITY} - advert_int 2 - authentication { - auth_type PASS - auth_pass 4be37dc3b4c90194d1600c483e10ad1d - } - virtual_ipaddress { - ${VIRTUAL_IP} - } - track_script { - chk_apiserver - } -} -``` - -* 在k8s-master1、k8s-master2、k8s-master3上重启keepalived服务,检测虚拟IP地址是否生效 - -``` -$ systemctl restart keepalived -$ ping 192.168.60.80 -``` - ---- -[返回目录](#目录) - -#### nginx负载均衡配置 - -* 在k8s-master1、k8s-master2、k8s-master3上修改nginx-default.conf设置,${HOST_IP}对应k8s-master1、k8s-master2、k8s-master3的地址。通过nginx把访问apiserver的6443端口负载均衡到8433端口上 - -``` -$ vi /root/kubeadm-ha/nginx-default.conf -stream { - upstream apiserver { - server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s; - server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s; - server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s; - } - - server { - listen 8443; - proxy_connect_timeout 1s; - proxy_timeout 3s; - proxy_pass apiserver; - } -} -``` - -* 在k8s-master1、k8s-master2、k8s-master3上启动nginx容器 - -``` -$ docker run -d -p 8443:8443 \ ---name nginx-lb \ ---restart always \ --v /root/kubeadm-ha/nginx-default.conf:/etc/nginx/nginx.conf \ -nginx -``` - -* 在k8s-master1、k8s-master2、k8s-master3上检测keepalived服务的虚拟IP地址指向 - -``` -$ curl -L 192.168.60.80:8443 | wc -l - % Total % Received % Xferd Average Speed Time Time Time Current - Dload Upload Total Spent Left Speed -100 14 0 14 0 0 18324 0 --:--:-- --:--:-- --:--:-- 14000 -1 -``` - -* 业务恢复后务必重启keepalived,否则keepalived会处于关闭状态 - -``` -$ systemctl restart keepalived -``` - -* 在k8s-master1、k8s-master2、k8s-master3上查看keeplived日志,有以下输出表示当前虚拟IP地址绑定的主机 - -``` -$ systemctl status keepalived -l -VRRP_Instance(VI_1) Sending gratuitous ARPs on ens160 for 192.168.60.80 -``` - ---- -[返回目录](#目录) - -#### kube-proxy配置 - -* 在k8s-master1上设置kube-proxy使用keepalived的虚拟IP地址,避免k8s-master1异常的时候所有节点的kube-proxy连接不上 - -``` -$ kubectl get -n kube-system configmap -NAME DATA AGE -extension-apiserver-authentication 6 4h -kube-flannel-cfg 2 4h -kube-proxy 1 4h -``` - -* 在k8s-master1上修改configmap/kube-proxy的server指向keepalived的虚拟IP地址 - -``` -$ kubectl edit -n kube-system configmap/kube-proxy - server: https://192.168.60.80:8443 -``` - -* 在k8s-master1上查看configmap/kube-proxy设置情况 - -``` -$ kubectl get -n kube-system configmap/kube-proxy -o yaml -``` - -* 在k8s-master1上删除所有kube-proxy的pod,让proxy重建 - -``` -kubectl get pods --all-namespaces -o wide | grep proxy -``` - -* 在k8s-master1、k8s-master2、k8s-master3上重启docker kubelet keepalived服务 - -``` -$ systemctl restart docker kubelet keepalived -``` - ---- -[返回目录](#目录) - -#### 验证master集群高可用 - -* 在k8s-master1上检查各个节点pod的启动状态,每个上都成功启动heapster、kube-apiserver、kube-controller-manager、kube-dns、kube-flannel、kube-proxy、kube-scheduler、kubernetes-dashboard、monitoring-grafana、monitoring-influxdb。并且所有pod都处于Running状态表示正常 - -``` -$ kubectl get pods --all-namespaces -o wide | grep k8s-master1 - -$ kubectl get pods --all-namespaces -o wide | grep k8s-master2 - -$ kubectl get pods --all-namespaces -o wide | grep k8s-master3 -``` - ---- -[返回目录](#目录) - -### node节点加入高可用集群设置 - -#### kubeadm加入高可用集群 -* 在k8s-master1上禁止在所有master节点上发布应用 - -``` -$ kubectl patch node k8s-master1 -p '{"spec":{"unschedulable":true}}' - -$ kubectl patch node k8s-master2 -p 
'{"spec":{"unschedulable":true}}' - -$ kubectl patch node k8s-master3 -p '{"spec":{"unschedulable":true}}' -``` - -* 在k8s-master1上查看集群的token - -``` -$ kubeadm token list -TOKEN TTL EXPIRES USAGES DESCRIPTION -xxxxxx.yyyyyy authentication,signing The default bootstrap token generated by 'kubeadm init' -``` - -* 在k8s-node1 ~ k8s-node8上,${TOKEN}为k8s-master1上显示的token,${VIRTUAL_IP}为keepalived的虚拟IP地址192.168.60.80 - -``` -$ kubeadm join --token ${TOKEN} ${VIRTUAL_IP}:8443 -``` - ---- -[返回目录](#目录) - -#### 部署应用验证集群 - -* 在k8s-node1 ~ k8s-node8上查看kubelet状态,kubelet状态为active (running)表示kubelet服务正常启动 - -``` -$ systemctl status kubelet -● kubelet.service - kubelet: The Kubernetes Node Agent - Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled) - Drop-In: /etc/systemd/system/kubelet.service.d - └─10-kubeadm.conf - Active: active (running) since Tue 2017-06-27 16:23:43 CST; 1 day 18h ago - Docs: http://kubernetes.io/docs/ - Main PID: 1146 (kubelet) - Memory: 204.9M - CGroup: /system.slice/kubelet.service - ├─ 1146 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require... - ├─ 2553 journalctl -k -f - ├─ 4988 /usr/sbin/glusterfs --log-level=ERROR --log-file=/var/lib/kubelet/pl... - └─14720 /usr/sbin/glusterfs --log-level=ERROR --log-file=/var/lib/kubelet/pl... -``` - -* 在k8s-master1上检查各个节点状态,发现所有k8s-nodes节点成功加入 - -``` -$ kubectl get nodes -o wide -NAME STATUS AGE VERSION -k8s-master1 Ready,SchedulingDisabled 5h v1.6.4 -k8s-master2 Ready,SchedulingDisabled 4h v1.6.4 -k8s-master3 Ready,SchedulingDisabled 4h v1.6.4 -k8s-node1 Ready 6m v1.6.4 -k8s-node2 Ready 4m v1.6.4 -k8s-node3 Ready 4m v1.6.4 -k8s-node4 Ready 3m v1.6.4 -k8s-node5 Ready 3m v1.6.4 -k8s-node6 Ready 3m v1.6.4 -k8s-node7 Ready 3m v1.6.4 -k8s-node8 Ready 3m v1.6.4 -``` - -* 在k8s-master1上测试部署nginx服务,nginx服务成功部署到k8s-node5上 - -``` -$ kubectl run nginx --image=nginx --port=80 -deployment "nginx" created - -$ kubectl get pod -o wide -l=run=nginx -NAME READY STATUS RESTARTS AGE IP NODE -nginx-2662403697-pbmwt 1/1 Running 0 5m 10.244.7.6 k8s-node5 -``` - -* 在k8s-master1让nginx服务外部可见 - -``` -$ kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort -service "nginx" exposed - -$ kubectl get svc -l=run=nginx -NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE -nginx 10.105.151.69 80:31639/TCP 43s - -$ curl k8s-master2:31639 - - - -Welcome to nginx! - - - -

-<h1>Welcome to nginx!</h1>
-<p>If you see this page, the nginx web server is successfully installed and
-working. Further configuration is required.</p>
-
-<p>For online documentation and support please refer to
-<a href="http://nginx.org/">nginx.org</a>.<br/>
-Commercial support is available at
-<a href="http://nginx.com/">nginx.com</a>.</p>
-
-<p><em>Thank you for using nginx.</em></p>

- - - -``` - -* 至此,kubernetes高可用集群成功部署 ---- -[返回目录](#目录) - diff --git a/v1.6/images/dashboard.png b/v1.6/images/dashboard.png deleted file mode 100644 index f86f497..0000000 Binary files a/v1.6/images/dashboard.png and /dev/null differ diff --git a/v1.6/images/heapster.png b/v1.6/images/heapster.png deleted file mode 100644 index e7d320a..0000000 Binary files a/v1.6/images/heapster.png and /dev/null differ diff --git a/v1.6/kube-dashboard/kubernetes-dashboard-1.6.1.yaml b/v1.6/kube-dashboard/kubernetes-dashboard-1.6.1.yaml deleted file mode 100644 index ca8df50..0000000 --- a/v1.6/kube-dashboard/kubernetes-dashboard-1.6.1.yaml +++ /dev/null @@ -1,98 +0,0 @@ -# Copyright 2015 Google Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# Configuration to deploy release version of the Dashboard UI compatible with -# Kubernetes 1.6 (RBAC enabled). -# -# Example usage: kubectl create -f - -apiVersion: v1 -kind: ServiceAccount -metadata: - labels: - k8s-app: kubernetes-dashboard - name: kubernetes-dashboard - namespace: kube-system ---- -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: kubernetes-dashboard - labels: - k8s-app: kubernetes-dashboard -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: cluster-admin -subjects: -- kind: ServiceAccount - name: kubernetes-dashboard - namespace: kube-system ---- -kind: Deployment -apiVersion: extensions/v1beta1 -metadata: - labels: - k8s-app: kubernetes-dashboard - name: kubernetes-dashboard - namespace: kube-system -spec: - replicas: 1 - revisionHistoryLimit: 10 - selector: - matchLabels: - k8s-app: kubernetes-dashboard - template: - metadata: - labels: - k8s-app: kubernetes-dashboard - spec: - containers: - - name: kubernetes-dashboard - image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1 - ports: - - containerPort: 9090 - protocol: TCP - args: - # Uncomment the following line to manually specify Kubernetes API server Host - # If not specified, Dashboard will attempt to auto discover the API server and connect - # to it. Uncomment only if the default does not work. 
- # - --apiserver-host=http://my-address:port - livenessProbe: - httpGet: - path: / - port: 9090 - initialDelaySeconds: 30 - timeoutSeconds: 30 - serviceAccountName: kubernetes-dashboard - # Comment the following tolerations if Dashboard must not be deployed on master - tolerations: - - key: node-role.kubernetes.io/master - effect: NoSchedule ---- -kind: Service -apiVersion: v1 -metadata: - labels: - k8s-app: kubernetes-dashboard - name: kubernetes-dashboard - namespace: kube-system -spec: - type: NodePort - ports: - - port: 80 - targetPort: 9090 - nodePort: 30000 - selector: - k8s-app: kubernetes-dashboard diff --git a/v1.6/kube-flannel/step1-kube-flannel-rbac-v0.7.1.yml b/v1.6/kube-flannel/step1-kube-flannel-rbac-v0.7.1.yml deleted file mode 100644 index d66465c..0000000 --- a/v1.6/kube-flannel/step1-kube-flannel-rbac-v0.7.1.yml +++ /dev/null @@ -1,42 +0,0 @@ -# Create the clusterrole and clusterrolebinding: -# $ kubectl create -f kube-flannel-rbac.yml -# Create the pod using the same namespace used by the flannel serviceaccount: -# $ kubectl create --namespace kube-system -f kube-flannel.yml ---- -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1beta1 -metadata: - name: flannel -rules: - - apiGroups: - - "" - resources: - - pods - verbs: - - get - - apiGroups: - - "" - resources: - - nodes - verbs: - - list - - watch - - apiGroups: - - "" - resources: - - nodes/status - verbs: - - patch ---- -kind: ClusterRoleBinding -apiVersion: rbac.authorization.k8s.io/v1beta1 -metadata: - name: flannel -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: flannel -subjects: -- kind: ServiceAccount - name: flannel - namespace: kube-system diff --git a/v1.6/kube-flannel/step2-kube-flannel-v0.7.1.yml b/v1.6/kube-flannel/step2-kube-flannel-v0.7.1.yml deleted file mode 100644 index 09dfe53..0000000 --- a/v1.6/kube-flannel/step2-kube-flannel-v0.7.1.yml +++ /dev/null @@ -1,93 +0,0 @@ ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: flannel - namespace: kube-system ---- -kind: ConfigMap -apiVersion: v1 -metadata: - name: kube-flannel-cfg - namespace: kube-system - labels: - tier: node - app: flannel -data: - cni-conf.json: | - { - "name": "cbr0", - "type": "flannel", - "delegate": { - "isDefaultGateway": true - } - } - net-conf.json: | - { - "Network": "10.244.0.0/16", - "Backend": { - "Type": "vxlan" - } - } ---- -apiVersion: extensions/v1beta1 -kind: DaemonSet -metadata: - name: kube-flannel-ds - namespace: kube-system - labels: - tier: node - app: flannel -spec: - template: - metadata: - labels: - tier: node - app: flannel - spec: - hostNetwork: true - nodeSelector: - beta.kubernetes.io/arch: amd64 - tolerations: - - key: node-role.kubernetes.io/master - operator: Exists - effect: NoSchedule - serviceAccountName: flannel - containers: - - name: kube-flannel - image: quay.io/coreos/flannel:v0.7.1-amd64 - command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ] - securityContext: - privileged: true - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - volumeMounts: - - name: run - mountPath: /run - - name: flannel-cfg - mountPath: /etc/kube-flannel/ - - name: install-cni - image: quay.io/coreos/flannel:v0.7.1-amd64 - command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ] - volumeMounts: - - name: cni - mountPath: /etc/cni/net.d - - name: flannel-cfg - 
mountPath: /etc/kube-flannel/ - volumes: - - name: run - hostPath: - path: /run - - name: cni - hostPath: - path: /etc/cni/net.d - - name: flannel-cfg - configMap: - name: kube-flannel-cfg diff --git a/v1.6/kube-heapster/grafana.yaml b/v1.6/kube-heapster/grafana.yaml deleted file mode 100644 index 4bdce05..0000000 --- a/v1.6/kube-heapster/grafana.yaml +++ /dev/null @@ -1,66 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: monitoring-grafana - namespace: kube-system -spec: - replicas: 1 - template: - metadata: - labels: - task: monitoring - k8s-app: grafana - spec: - containers: - - name: grafana - image: gcr.io/google_containers/heapster-grafana-amd64:v4.0.2 - ports: - - containerPort: 3000 - protocol: TCP - volumeMounts: - - mountPath: /var - name: grafana-storage - env: - - name: INFLUXDB_HOST - value: monitoring-influxdb - - name: GRAFANA_PORT - value: "3000" - # The following env variables are required to make Grafana accessible via - # the kubernetes api-server proxy. On production clusters, we recommend - # removing these env variables, setup auth for grafana, and expose the grafana - # service using a LoadBalancer or a public IP. - - name: GF_AUTH_BASIC_ENABLED - value: "false" - - name: GF_AUTH_ANONYMOUS_ENABLED - value: "true" - - name: GF_AUTH_ANONYMOUS_ORG_ROLE - value: Admin - - name: GF_SERVER_ROOT_URL - # If you're only using the API Server proxy, set this value instead: - # value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/ - value: / - volumes: - - name: grafana-storage - emptyDir: {} ---- -apiVersion: v1 -kind: Service -metadata: - labels: - # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons) - # If you are NOT using this as an addon, you should comment out this line. - kubernetes.io/cluster-service: 'true' - kubernetes.io/name: monitoring-grafana - name: monitoring-grafana - namespace: kube-system -spec: - # In a production setup, we recommend accessing Grafana through an external Loadbalancer - # or through a public IP. 
- # type: LoadBalancer - # You could also use NodePort to expose the service at a randomly-generated port - # type: NodePort - ports: - - port: 80 - targetPort: 3000 - selector: - k8s-app: grafana diff --git a/v1.6/kube-heapster/heapster-rbac.yaml b/v1.6/kube-heapster/heapster-rbac.yaml deleted file mode 100644 index 74df610..0000000 --- a/v1.6/kube-heapster/heapster-rbac.yaml +++ /dev/null @@ -1,67 +0,0 @@ -apiVersion: v1 -kind: ServiceAccount -metadata: - name: heapster - namespace: kube-system ---- -kind: ClusterRoleBinding -apiVersion: rbac.authorization.k8s.io/v1beta1 -metadata: - name: heapster -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: system:heapster -subjects: -- kind: ServiceAccount - name: heapster - namespace: kube-system ---- -apiVersion: apps/v1beta1 -kind: Deployment -metadata: - name: heapster - labels: - k8s-app: heapster - task: monitoring - namespace: kube-system -spec: - replicas: 1 - template: - metadata: - labels: - k8s-app: heapster - task: monitoring - spec: - tolerations: - - key: beta.kubernetes.io/arch - value: arm - effect: NoSchedule - - key: beta.kubernetes.io/arch - value: arm64 - effect: NoSchedule - serviceAccountName: heapster - containers: - - name: heapster - image: gcr.io/google_containers/heapster-amd64:v1.3.0 - command: - - /heapster - - --source=kubernetes:https://kubernetes.default - - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086 ---- -apiVersion: v1 -kind: Service -metadata: - labels: - task: monitoring - k8s-app: heapster - kubernetes.io/cluster-service: "true" - kubernetes.io/name: Heapster - name: heapster - namespace: kube-system -spec: - ports: - - port: 80 - targetPort: 8082 - selector: - k8s-app: heapster diff --git a/v1.6/kube-heapster/influxdb.yaml b/v1.6/kube-heapster/influxdb.yaml deleted file mode 100644 index 9afdf55..0000000 --- a/v1.6/kube-heapster/influxdb.yaml +++ /dev/null @@ -1,40 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: monitoring-influxdb - namespace: kube-system -spec: - replicas: 1 - template: - metadata: - labels: - task: monitoring - k8s-app: influxdb - spec: - containers: - - name: influxdb - image: gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1 - volumeMounts: - - mountPath: /data - name: influxdb-storage - volumes: - - name: influxdb-storage - emptyDir: {} ---- -apiVersion: v1 -kind: Service -metadata: - labels: - task: monitoring - # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons) - # If you are NOT using this as an addon, you should comment out this line. 
- kubernetes.io/cluster-service: 'true' - kubernetes.io/name: monitoring-influxdb - name: monitoring-influxdb - namespace: kube-system -spec: - ports: - - port: 8086 - targetPort: 8086 - selector: - k8s-app: influxdb diff --git a/v1.6/kubeadm-init-v1.6.x.yaml b/v1.6/kubeadm-init-v1.6.x.yaml deleted file mode 100644 index 69a5ad8..0000000 --- a/v1.6/kubeadm-init-v1.6.x.yaml +++ /dev/null @@ -1,10 +0,0 @@ -apiVersion: kubeadm.k8s.io/v1alpha1 -kind: MasterConfiguration -kubernetesVersion: v1.6.4 -networking: - podSubnet: 10.244.0.0/16 -etcd: - endpoints: - - http://${HOST_IP}:2379 - - http://${HOST_IP}:2379 - - http://${HOST_IP}:2379 diff --git a/v1.6/nginx-default.conf b/v1.6/nginx-default.conf deleted file mode 100644 index b330c0b..0000000 --- a/v1.6/nginx-default.conf +++ /dev/null @@ -1,47 +0,0 @@ - -user nginx; -worker_processes 1; - -error_log /var/log/nginx/error.log warn; -pid /var/run/nginx.pid; - - -events { - worker_connections 1024; -} - - -http { - include /etc/nginx/mime.types; - default_type application/octet-stream; - - log_format main '$remote_addr - $remote_user [$time_local] "$request" ' - '$status $body_bytes_sent "$http_referer" ' - '"$http_user_agent" "$http_x_forwarded_for"'; - - access_log /var/log/nginx/access.log main; - - sendfile on; - #tcp_nopush on; - - keepalive_timeout 65; - - #gzip on; - - include /etc/nginx/conf.d/*.conf; -} - -stream { - upstream apiserver { - server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s; - server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s; - server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s; - } - - server { - listen 8443; - proxy_connect_timeout 1s; - proxy_timeout 3s; - proxy_pass apiserver; - } -} diff --git a/v1.7/README.md b/v1.7/README.md deleted file mode 100644 index a8c23f1..0000000 --- a/v1.7/README.md +++ /dev/null @@ -1,1141 +0,0 @@ -# kubeadm-highavailiability - kubernetes high availiability deployment based on kubeadm, for Kubernetes version v1.11.x/v1.9.x/v1.7.x/v1.6.x - -![k8s logo](../images/Kubernetes.png) - -- [中文文档(for v1.11.x版本)](../README_CN.md) -- [English document(for v1.11.x version)](../README.md) -- [中文文档(for v1.9.x版本)](../v1.9/README_CN.md) -- [English document(for v1.9.x version)](../v1.9/README.md) -- [中文文档(for v1.7.x版本)](../v1.7/README_CN.md) -- [English document(for v1.7.x version)](../v1.7/README.md) -- [中文文档(for v1.6.x版本)](../v1.6/README_CN.md) -- [English document(for v1.6.x version)](../v1.6/README.md) - ---- - -- [GitHub project URL](https://github.com/cookeem/kubeadm-ha/) -- [OSChina project URL](https://git.oschina.net/cookeem/kubeadm-ha/) - ---- - -- This operation instruction is for version v1.7.x kubernetes cluster - -### category - -1. [deployment architecture](#deployment-architecture) - 1. [deployment architecture summary](#deployment-architecture-summary) - 1. [detail deployment architecture](#detail-deployment-architecture) - 1. [hosts list](#hosts-list) -1. [prerequisites](#prerequisites) - 1. [version info](#version-info) - 1. [required docker images](#required-docker-images) - 1. [system configuration](#system-configuration) -1. [kubernetes installation](#kubernetes-installation) - 1. [kubernetes and related services installation](#kubernetes-and-related-services-installation) - 1. [load docker images](#load-docker-images) -1. [use kubeadm to init first master](#use-kubeadm-to-init-first-master) - 1. [deploy independent etcd tls cluster](#deploy-independent-etcd-tls-cluster) - 1. [kubeadm init](#kubeadm-init) - 1. 
[install flannel networks addon](#install-flannel-networks-addon) - 1. [install dashboard addon](#install-dashboard-addon) - 1. [install heapster addon](#install-heapster-addon) -1. [kubernetes masters high avialiability configuration](#kubernetes-masters-high-avialiability-configuration) - 1. [copy configuration files](#copy-configuration-files) - 1. [edit configuration files](#edit-configuration-files) - 1. [verify master high avialiability](#verify-master-high-avialiability) - 1. [keepalived installation](#keepalived-installation) - 1. [nginx load balancer configuration](#nginx-load-balancer-configuration) - 1. [kube-proxy configuration](#kube-proxy-configuration) - 1. [verfify master high avialiability with keepalived](#verfify-master-high-avialiability-with-keepalived) -1. [k8s-nodes join the kubernetes cluster](#k8s-nodes-join-the-kubernetes-cluster) - 1. [use kubeadm to join the cluster](#use-kubeadm-to-join-the-cluster) - 1. [deploy nginx application to verify installation](#deploy-nginx-application-to-verify-installation) - - -### deployment architecture - -#### deployment architecture summary - -![ha logo](../images/ha.png) - ---- -[category](#category) - -#### detail deployment architecture - -![k8s ha](../images/k8s-ha.png) - -* kubernetes components: - -> kube-apiserver: exposes the Kubernetes API. It is the front-end for the Kubernetes control plane. It is designed to scale horizontally – that is, it scales by deploying more instances. - -> etcd: is used as Kubernetes’ backing store. All cluster data is stored here. Always have a backup plan for etcd’s data for your Kubernetes cluster. - - -> kube-scheduler: watches newly created pods that have no node assigned, and selects a node for them to run on. - - -> kube-controller-manager: runs controllers, which are the background threads that handle routine tasks in the cluster. Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process. - -> kubelet: is the primary node agent. It watches for pods that have been assigned to its node (either by apiserver or via local configuration file) - -> kube-proxy: enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding. - - -* load balancer - -> keepalived cluster config a virtual IP address (192.168.60.80), this virtual IP address point to k8s-master1, k8s-master2, k8s-master3. - -> nginx service as the load balancer of k8s-master1, k8s-master2, k8s-master3's apiserver. The other nodes kubernetes services connect the keepalived virtual ip address (192.168.60.80) and nginx exposed port (8443) to communicate with the master cluster's apiservers. 
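-* optional: once keepalived and nginx are running (their installation is covered in the sections below), the load-balancing path described above can be confirmed from any node with a minimal check like the following. This is only a sketch and assumes the virtual IP 192.168.60.80 and the nginx listener port 8443 used throughout this guide
-
-```
-# see which master currently holds the keepalived virtual IP
-$ ip addr | grep 192.168.60.80
-
-# confirm the nginx listener forwards to a live apiserver;
-# any HTTP response here means the VIP and the 8443 listener reach an apiserver
-$ curl -k https://192.168.60.80:8443/healthz
-```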
- ---- -[category](#category) - -#### hosts list - - HostName | IPAddress | Notes | Components - :--- | :--- | :--- | :--- - k8s-master1 | 192.168.60.71 | master node 1 | keepalived, nginx, etcd, kubelet, kube-apiserver, kube-scheduler, kube-proxy, kube-dashboard, heapster - k8s-master2 | 192.168.60.72 | master node 2 | keepalived, nginx, etcd, kubelet, kube-apiserver, kube-scheduler, kube-proxy, kube-dashboard, heapster - k8s-master3 | 192.168.60.73 | master node 3 | keepalived, nginx, etcd, kubelet, kube-apiserver, kube-scheduler, kube-proxy, kube-dashboard, heapster - N/A | 192.168.60.80 | keepalived virtual IP | N/A - k8s-node1 ~ 8 | 192.168.60.81 ~ 88 | 8 worker nodes | kubelet, kube-proxy - ---- -[category](#category) - -### prerequisites - -#### version info - -* Linux version: CentOS 7.3.1611 - -``` -cat /etc/redhat-release -CentOS Linux release 7.3.1611 (Core) -``` - -* docker version: 1.12.6 - -``` -$ docker version -Client: - Version: 1.12.6 - API version: 1.24 - Go version: go1.6.4 - Git commit: 78d1802 - Built: Tue Jan 10 20:20:01 2017 - OS/Arch: linux/amd64 - -Server: - Version: 1.12.6 - API version: 1.24 - Go version: go1.6.4 - Git commit: 78d1802 - Built: Tue Jan 10 20:20:01 2017 - OS/Arch: linux/amd64 -``` - -* kubeadm version: v1.7.0 - -``` -$ kubeadm version -kubeadm version: &version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T22:55:19Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} -``` - -* kubelet version: v1.7.0 - -``` -$ kubelet --version -Kubernetes v1.7.0 -``` - ---- - -[category](#category) - -#### required docker images - -* on your local laptop MacOSX: pull related docker images - -``` -$ docker pull gcr.io/google_containers/kube-proxy-amd64:v1.7.0 -$ docker pull gcr.io/google_containers/kube-apiserver-amd64:v1.7.0 -$ docker pull gcr.io/google_containers/kube-controller-manager-amd64:v1.7.0 -$ docker pull gcr.io/google_containers/kube-scheduler-amd64:v1.7.0 -$ docker pull gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4 -$ docker pull gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4 -$ docker pull gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4 -$ docker pull nginx:latest -$ docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1 -$ docker pull quay.io/coreos/flannel:v0.7.1-amd64 -$ docker pull gcr.io/google_containers/heapster-amd64:v1.3.0 -$ docker pull gcr.io/google_containers/etcd-amd64:3.0.17 -$ docker pull gcr.io/google_containers/heapster-grafana-amd64:v4.0.2 -$ docker pull gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1 -$ docker pull gcr.io/google_containers/pause-amd64:3.0 -``` - -* on your local laptop MacOSX: clone codes from git and change working directory in codes - -``` -$ git clone https://github.com/cookeem/kubeadm-ha -$ cd kubeadm-ha -``` - -* on your local laptop MacOSX: save related docker images in docker-images directory - -``` -$ mkdir -p docker-images -$ docker save -o docker-images/kube-proxy-amd64 gcr.io/google_containers/kube-proxy-amd64:v1.7.0 -$ docker save -o docker-images/kube-apiserver-amd64 gcr.io/google_containers/kube-apiserver-amd64:v1.7.0 -$ docker save -o docker-images/kube-controller-manager-amd64 gcr.io/google_containers/kube-controller-manager-amd64:v1.7.0 -$ docker save -o docker-images/kube-scheduler-amd64 gcr.io/google_containers/kube-scheduler-amd64:v1.7.0 -$ docker save -o docker-images/k8s-dns-sidecar-amd64 
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4 -$ docker save -o docker-images/k8s-dns-kube-dns-amd64 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4 -$ docker save -o docker-images/k8s-dns-dnsmasq-nanny-amd64 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4 -$ docker save -o docker-images/heapster-grafana-amd64 gcr.io/google_containers/heapster-grafana-amd64:v4.2.0 -$ docker save -o docker-images/nginx nginx:latest -$ docker save -o docker-images/kubernetes-dashboard-amd64 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1 -$ docker save -o docker-images/flannel quay.io/coreos/flannel:v0.7.1-amd64 -$ docker save -o docker-images/heapster-amd64 gcr.io/google_containers/heapster-amd64:v1.3.0 -$ docker save -o docker-images/etcd-amd64 gcr.io/google_containers/etcd-amd64:3.0.17 -$ docker save -o docker-images/heapster-grafana-amd64 gcr.io/google_containers/heapster-grafana-amd64:v4.0.2 -$ docker save -o docker-images/heapster-influxdb-amd64 gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1 -$ docker save -o docker-images/pause-amd64 gcr.io/google_containers/pause-amd64:3.0 -``` - -* on your local laptop MacOSX: copy all codes and docker images directory to all kubernetes nodes - -``` -$ scp -r * root@k8s-master1:/root/kubeadm-ha -$ scp -r * root@k8s-master2:/root/kubeadm-ha -$ scp -r * root@k8s-master3:/root/kubeadm-ha -$ scp -r * root@k8s-node1:/root/kubeadm-ha -$ scp -r * root@k8s-node2:/root/kubeadm-ha -$ scp -r * root@k8s-node3:/root/kubeadm-ha -$ scp -r * root@k8s-node4:/root/kubeadm-ha -$ scp -r * root@k8s-node5:/root/kubeadm-ha -$ scp -r * root@k8s-node6:/root/kubeadm-ha -$ scp -r * root@k8s-node7:/root/kubeadm-ha -$ scp -r * root@k8s-node8:/root/kubeadm-ha -``` - ---- -[category](#category) - -#### system configuration - -* on all kubernetes nodes: add kubernetes' repository - -``` -$ cat < /etc/yum.repos.d/kubernetes.repo -[kubernetes] -name=Kubernetes -baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 -enabled=1 -gpgcheck=1 -repo_gpgcheck=1 -gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg - https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg -EOF -``` - -* on all kubernetes nodes: use yum to update system - -``` -$ yum update -y -``` - -* on all kubernetes nodes: turn off firewalld service - -``` -$ systemctl disable firewalld && systemctl stop firewalld && systemctl status firewalld -``` - -* on all kubernetes nodes: set SELINUX to permissive mode - -``` -$ vi /etc/selinux/config -SELINUX=permissive -``` - -* on all kubernetes nodes: set iptables parameters - -``` -$ vi /etc/sysctl.d/k8s.conf -net.bridge.bridge-nf-call-iptables = 1 -net.bridge.bridge-nf-call-ip6tables = 1 -``` - -* on all kubernetes nodes: reboot host - -``` -$ reboot -``` - ---- -[category](#category) - -### kubernetes installation - -#### kubernetes and related services installation - -* on all kubernetes nodes: check SELINUX mode must set as permissive mode - -``` -$ getenforce -Permissive -``` - -* on all kubernetes nodes: install kubernetes and related services, then start up kubelet and docker daemon - -``` -$ yum search docker --showduplicates -$ yum install docker-1.12.6-16.el7.centos.x86_64 - -$ yum search kubelet --showduplicates -$ yum install kubelet-1.7.0-0.x86_64 - -$ yum search kubeadm --showduplicates -$ yum install kubeadm-1.7.0-0.x86_64 - -$ yum search kubernetes-cni --showduplicates -$ yum install kubernetes-cni-0.5.1-0.x86_64 - -$ systemctl enable docker && systemctl start docker -$ systemctl enable kubelet && 
systemctl start kubelet -``` - ---- -[category](#category) - -#### load docker images - -* on all kubernetes nodes: load docker images - -``` -$ docker load -i /root/kubeadm-ha/docker-images/etcd-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/flannel -$ docker load -i /root/kubeadm-ha/docker-images/heapster-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/heapster-grafana-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/heapster-influxdb-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/k8s-dns-dnsmasq-nanny-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/k8s-dns-kube-dns-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/k8s-dns-sidecar-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/kube-apiserver-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/kube-controller-manager-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/kube-proxy-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/kubernetes-dashboard-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/kube-scheduler-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/pause-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/nginx - -$ docker images -REPOSITORY TAG IMAGE ID CREATED SIZE -gcr.io/google_containers/kube-proxy-amd64 v1.7.0 d2d44013d0f8 4 days ago 114.7 MB -gcr.io/google_containers/kube-apiserver-amd64 v1.7.0 f0d4b746fb2b 4 days ago 185.2 MB -gcr.io/google_containers/kube-controller-manager-amd64 v1.7.0 36bf73ed0632 4 days ago 137 MB -gcr.io/google_containers/kube-scheduler-amd64 v1.7.0 5c9a7f60a95c 4 days ago 77.16 MB -gcr.io/google_containers/k8s-dns-sidecar-amd64 1.14.4 38bac66034a6 7 days ago 41.81 MB -gcr.io/google_containers/k8s-dns-kube-dns-amd64 1.14.4 a8e00546bcf3 7 days ago 49.38 MB -gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 1.14.4 f7f45b9cb733 7 days ago 41.41 MB -nginx latest 958a7ae9e569 4 weeks ago 109.4 MB -gcr.io/google_containers/kubernetes-dashboard-amd64 v1.6.1 71dfe833ce74 6 weeks ago 134.4 MB -quay.io/coreos/flannel v0.7.1-amd64 cd4ae0be5e1b 10 weeks ago 77.76 MB -gcr.io/google_containers/heapster-amd64 v1.3.0 f9d33bedfed3 3 months ago 68.11 MB -gcr.io/google_containers/etcd-amd64 3.0.17 243830dae7dd 4 months ago 168.9 MB -gcr.io/google_containers/heapster-grafana-amd64 v4.0.2 a1956d2a1a16 5 months ago 131.5 MB -gcr.io/google_containers/heapster-influxdb-amd64 v1.1.1 d3fccbedd180 5 months ago 11.59 MB -gcr.io/google_containers/pause-amd64 3.0 99e59f495ffa 14 months ago 746.9 kB -``` - ---- -[category](#category) - -### use kubeadm to init first master - -#### deploy independent etcd tls cluster - -* on k8s-master1: use docker to start independent etcd tls cluster - -``` -$ docker stop etcd && docker rm etcd -$ rm -rf /var/lib/etcd-cluster -$ mkdir -p /var/lib/etcd-cluster -$ docker run -d \ ---restart always \ --v /etc/ssl/certs:/etc/ssl/certs \ --v /var/lib/etcd-cluster:/var/lib/etcd \ --p 4001:4001 \ --p 2380:2380 \ --p 2379:2379 \ ---name etcd \ -gcr.io/google_containers/etcd-amd64:3.0.17 \ -etcd --name=etcd0 \ ---advertise-client-urls=http://192.168.60.71:2379,http://192.168.60.71:4001 \ ---listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \ ---initial-advertise-peer-urls=http://192.168.60.71:2380 \ ---listen-peer-urls=http://0.0.0.0:2380 \ ---initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \ ---initial-cluster=etcd0=http://192.168.60.71:2380,etcd1=http://192.168.60.72:2380,etcd2=http://192.168.60.73:2380 \ ---initial-cluster-state=new \ ---auto-tls \ ---peer-auto-tls \ ---data-dir=/var/lib/etcd -``` - -* on 
k8s-master2: use docker to start independent etcd tls cluster - -``` -$ docker stop etcd && docker rm etcd -$ rm -rf /var/lib/etcd-cluster -$ mkdir -p /var/lib/etcd-cluster -$ docker run -d \ ---restart always \ --v /etc/ssl/certs:/etc/ssl/certs \ --v /var/lib/etcd-cluster:/var/lib/etcd \ --p 4001:4001 \ --p 2380:2380 \ --p 2379:2379 \ ---name etcd \ -gcr.io/google_containers/etcd-amd64:3.0.17 \ -etcd --name=etcd1 \ ---advertise-client-urls=http://192.168.60.72:2379,http://192.168.60.72:4001 \ ---listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \ ---initial-advertise-peer-urls=http://192.168.60.72:2380 \ ---listen-peer-urls=http://0.0.0.0:2380 \ ---initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \ ---initial-cluster=etcd0=http://192.168.60.71:2380,etcd1=http://192.168.60.72:2380,etcd2=http://192.168.60.73:2380 \ ---initial-cluster-state=new \ ---auto-tls \ ---peer-auto-tls \ ---data-dir=/var/lib/etcd -``` - -* on k8s-master3: use docker to start independent etcd tls cluster - -``` -$ docker stop etcd && docker rm etcd -$ rm -rf /var/lib/etcd-cluster -$ mkdir -p /var/lib/etcd-cluster -$ docker run -d \ ---restart always \ --v /etc/ssl/certs:/etc/ssl/certs \ --v /var/lib/etcd-cluster:/var/lib/etcd \ --p 4001:4001 \ --p 2380:2380 \ --p 2379:2379 \ ---name etcd \ -gcr.io/google_containers/etcd-amd64:3.0.17 \ -etcd --name=etcd2 \ ---advertise-client-urls=http://192.168.60.73:2379,http://192.168.60.73:4001 \ ---listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \ ---initial-advertise-peer-urls=http://192.168.60.73:2380 \ ---listen-peer-urls=http://0.0.0.0:2380 \ ---initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \ ---initial-cluster=etcd0=http://192.168.60.71:2380,etcd1=http://192.168.60.72:2380,etcd2=http://192.168.60.73:2380 \ ---initial-cluster-state=new \ ---auto-tls \ ---peer-auto-tls \ ---data-dir=/var/lib/etcd -``` - -* on k8s-master1, k8s-master2, k8s-master3: check etcd cluster health - -``` -$ docker exec -ti etcd ash - -$ etcdctl member list -1a32c2d3f1abcad0: name=etcd2 peerURLs=http://192.168.60.73:2380 clientURLs=http://192.168.60.73:2379,http://192.168.60.73:4001 isLeader=false -1da4f4e8b839cb79: name=etcd1 peerURLs=http://192.168.60.72:2380 clientURLs=http://192.168.60.72:2379,http://192.168.60.72:4001 isLeader=false -4238bcb92d7f2617: name=etcd0 peerURLs=http://192.168.60.71:2380 clientURLs=http://192.168.60.71:2379,http://192.168.60.71:4001 isLeader=true - -$ etcdctl cluster-health -member 1a32c2d3f1abcad0 is healthy: got healthy result from http://192.168.60.73:2379 -member 1da4f4e8b839cb79 is healthy: got healthy result from http://192.168.60.72:2379 -member 4238bcb92d7f2617 is healthy: got healthy result from http://192.168.60.71:2379 -cluster is healthy - -$ exit -``` - ---- -[category](#category) - -#### kubeadm init - -* on k8s-master1: edit kubeadm-init-v1.7.x.yaml file, set etcd.endpoints.${HOST_IP} to k8s-master1, k8s-master2, k8s-master3's IP address. Set apiServerCertSANs.${HOST_IP} to k8s-master1, k8s-master2, k8s-master3's IP address. Set apiServerCertSANs.${HOST_NAME} to k8s-master1, k8s-master2, k8s-master3. 
Set apiServerCertSANs.${VIRTUAL_IP} to keepalived's virtual IP address - -``` -$ vi /root/kubeadm-ha/kubeadm-init-v1.7.x.yaml -apiVersion: kubeadm.k8s.io/v1alpha1 -kind: MasterConfiguration -kubernetesVersion: v1.7.0 -networking: - podSubnet: 10.244.0.0/16 -apiServerCertSANs: -- k8s-master1 -- k8s-master2 -- k8s-master3 -- 192.168.60.71 -- 192.168.60.72 -- 192.168.60.73 -- 192.168.60.80 -etcd: - endpoints: - - http://192.168.60.71:2379 - - http://192.168.60.72:2379 - - http://192.168.60.73:2379 -``` - -* if kubeadm init stuck at tips below, that may because cgroup-driver parameters different with your docker service's setting -* [apiclient] Created API client, waiting for the control plane to become ready -* use "journalctl -t kubelet -S '2017-06-08'" to check logs, and you will find error below: -* error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" -* you must change "KUBELET_CGROUP_ARGS=--cgroup-driver=systemd" to "KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs" - -``` -$ vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -#Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd" -Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs" - -$ systemctl daemon-reload && systemctl restart kubelet -``` - -* on k8s-master1: use kubeadm to init kubernetes cluster and connect external etcd cluster - -``` -$ kubeadm init --config=/root/kubeadm-ha/kubeadm-init-v1.7.x.yaml -``` - -* on k8s-master1: edit kube-apiserver.yaml file's admission-control settings, v1.7.0 use NodeRestriction admission control will prevent other master join the cluster, please reset it to v1.6.x recommended config. - -``` -$ vi /etc/kubernetes/manifests/kube-apiserver.yaml -# - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota - - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds -``` - -* on k8s-master1: restart docker and kubelet services - -``` -$ systemctl restart docker kubelet -``` - -* on k8s-master1: set environment variables $KUBECONFIG, make kubectl connect kubelet - -``` -$ vi ~/.bashrc -export KUBECONFIG=/etc/kubernetes/admin.conf - -$ source ~/.bashrc -``` - ---- -[category](#category) - -#### install flannel networks addon - -* on k8s-master1: install flannel networks addon, otherwise kube-dns pod will keep status at ContainerCreating - -``` -$ kubectl create -f /root/kubeadm-ha/kube-flannel -clusterrole "flannel" created -clusterrolebinding "flannel" created -serviceaccount "flannel" created -configmap "kube-flannel-cfg" created -daemonset "kube-flannel-ds" created -``` - -* on k8s-master1: after flannel networks addon installed, wait about 3 minutes, then all pods status are Running - -``` -$ kubectl get pods --all-namespaces -o wide -NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE -kube-system kube-apiserver-k8s-master1 1/1 Running 0 3m 192.168.60.71 k8s-master1 -kube-system kube-controller-manager-k8s-master1 1/1 Running 0 3m 192.168.60.71 k8s-master1 -kube-system kube-dns-3913472980-k9mt6 3/3 Running 0 4m 10.244.0.104 k8s-master1 -kube-system kube-flannel-ds-3hhjd 2/2 Running 0 1m 192.168.60.71 k8s-master1 -kube-system kube-proxy-rzq3t 1/1 Running 0 4m 192.168.60.71 k8s-master1 -kube-system kube-scheduler-k8s-master1 1/1 Running 0 3m 192.168.60.71 k8s-master1 -``` - ---- -[category](#category) - -#### install dashboard 
addon - -* on k8s-master1: install dashboard webUI addon - -``` -$ kubectl create -f /root/kubeadm-ha/kube-dashboard/ -serviceaccount "kubernetes-dashboard" created -clusterrolebinding "kubernetes-dashboard" created -deployment "kubernetes-dashboard" created -service "kubernetes-dashboard" created -``` - -* on k8s-master1: start up proxy - -``` -$ kubectl proxy --address='0.0.0.0' & -``` - -* on your local laptop MacOSX: use browser to check dashboard work correctly - -``` -http://k8s-master1:30000 -``` - -![dashboard](images/dashboard.png) - ---- -[category](#category) - -#### install heapster addon - -* on k8s-master1: make master be able to schedule pods - -``` -$ kubectl taint nodes --all node-role.kubernetes.io/master- -node "k8s-master1" tainted -``` - -* on k8s-master1: install heapster addon, the performance monitor addon - -``` -$ kubectl create -f /root/kubeadm-ha/kube-heapster -``` - -* on k8s-master1: restart docker and kubelet service, to make heapster work immediately - -``` -$ systemctl restart docker kubelet -``` - -* on k8s-master1: check pods status - -``` -$ kubectl get all --all-namespaces -o wide -NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE -kube-system heapster-783524908-kn6jd 1/1 Running 1 9m 10.244.0.111 k8s-master1 -kube-system kube-apiserver-k8s-master1 1/1 Running 1 15m 192.168.60.71 k8s-master1 -kube-system kube-controller-manager-k8s-master1 1/1 Running 1 15m 192.168.60.71 k8s-master1 -kube-system kube-dns-3913472980-k9mt6 3/3 Running 3 16m 10.244.0.110 k8s-master1 -kube-system kube-flannel-ds-3hhjd 2/2 Running 3 13m 192.168.60.71 k8s-master1 -kube-system kube-proxy-rzq3t 1/1 Running 1 16m 192.168.60.71 k8s-master1 -kube-system kube-scheduler-k8s-master1 1/1 Running 1 15m 192.168.60.71 k8s-master1 -kube-system kubernetes-dashboard-2039414953-d46vw 1/1 Running 1 11m 10.244.0.109 k8s-master1 -kube-system monitoring-grafana-3975459543-8l94z 1/1 Running 1 9m 10.244.0.112 k8s-master1 -kube-system monitoring-influxdb-3480804314-72ltf 1/1 Running 1 9m 10.244.0.113 k8s-master1 -``` - -* on your local laptop MacOSX: use browser to check dashboard, if it show CPU and Memory Usage info, then heapster work! - -``` -http://k8s-master1:30000 -``` - -![heapster](images/heapster.png) - -* now we finish the first kubernetes master installation, and flannel dashboard heapster work on master correctly - ---- -[category](#category) - -### kubernetes masters high avialiability configuration - -#### copy configuration files - -* on k8s-master1: copy /etc/kubernetes/ directory to k8s-master2 and k8s-master3 - -``` -scp -r /etc/kubernetes/ k8s-master2:/etc/ -scp -r /etc/kubernetes/ k8s-master3:/etc/ -``` - -* on k8s-master2, k8s-master3: restart kubelet service, and make sure kubelet status is active (running) - -``` -$ systemctl daemon-reload && systemctl restart kubelet - -$ systemctl status kubelet -● kubelet.service - kubelet: The Kubernetes Node Agent - Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled) - Drop-In: /etc/systemd/system/kubelet.service.d - └─10-kubeadm.conf - Active: active (running) since Tue 2017-06-27 16:24:22 CST; 1 day 17h ago - Docs: http://kubernetes.io/docs/ - Main PID: 2780 (kubelet) - Memory: 92.9M - CGroup: /system.slice/kubelet.service - ├─2780 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-... 
- └─2811 journalctl -k -f -``` - -* on k8s-master2, k8s-master3: set environment variables $KUBECONFIG, make kubectl connect kubelet - -``` -$ vi ~/.bashrc -export KUBECONFIG=/etc/kubernetes/admin.conf - -$ source ~/.bashrc -``` - -* on k8s-master2, k8s-master3: check nodes status, you will found that k8s-master2 and k8s-master3 are joined - -``` -$ kubectl get nodes -o wide -NAME STATUS AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION -k8s-master1 Ready 26m v1.7.0 CentOS Linux 7 (Core) 3.10.0-514.6.1.el7.x86_64 -k8s-master2 Ready 2m v1.7.0 CentOS Linux 7 (Core) 3.10.0-514.21.1.el7.x86_64 -k8s-master3 Ready 2m v1.7.0 CentOS Linux 7 (Core) 3.10.0-514.21.1.el7.x86_64 -``` - ---- -[category](#category) - -#### edit configuration files - -* on k8s-master2, k8s-master3: edit kube-apiserver.yaml file, replace ${HOST_IP} to current host's IP address - -``` -$ vi /etc/kubernetes/manifests/kube-apiserver.yaml - - --advertise-address=${HOST_IP} -``` - -* on k8s-master2, k8s-master3: edit kubelet.conf file, replace ${HOST_IP} to current host's IP address - -``` -$ vi /etc/kubernetes/kubelet.conf -server: https://${HOST_IP}:6443 -``` - -* on k8s-master2, k8s-master3: edit admin.conf file, replace ${HOST_IP} to current host's IP address - -``` -$ vi /etc/kubernetes/admin.conf - server: https://${HOST_IP}:6443 -``` - -* on k8s-master2, k8s-master3: edit controller-manager.conf file, replace ${HOST_IP} to current host's IP address - -``` -$ vi /etc/kubernetes/controller-manager.conf - server: https://${HOST_IP}:6443 -``` - -* on k8s-master2, k8s-master3: edit scheduler.conf file, replace ${HOST_IP} to current host's IP address - -``` -$ vi /etc/kubernetes/scheduler.conf - server: https://${HOST_IP}:6443 -``` - -* on k8s-master1, k8s-master2, k8s-master3: restart docker and kubelet services - -``` -$ systemctl daemon-reload && systemctl restart docker kubelet -``` - ---- -[category](#category) - -#### verify master high avialiability - -* on k8s-master1 or k8s-master2 or k8s-master3: check all master nodes pods startup status. apiserver controller-manager kube-scheduler proxy flannel running at k8s-master1, k8s-master2, k8s-master3 successfully. 
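* The per-master checks below grep the full pod list once per node; the same thing can be done in one pass with a small loop (just a sketch, using the k8s-master1 ~ k8s-master3 hostnames from this guide):

```
$ for node in k8s-master1 k8s-master2 k8s-master3; do echo "=== ${node} ==="; kubectl get pod --all-namespaces -o wide | grep ${node}; done
```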
- -``` -$ kubectl get pod --all-namespaces -o wide | grep k8s-master2 -kube-system kube-apiserver-k8s-master2 1/1 Running 1 55s 192.168.60.72 k8s-master2 -kube-system kube-controller-manager-k8s-master2 1/1 Running 2 18m 192.168.60.72 k8s-master2 -kube-system kube-flannel-ds-t8gkh 2/2 Running 4 18m 192.168.60.72 k8s-master2 -kube-system kube-proxy-bpgqw 1/1 Running 1 18m 192.168.60.72 k8s-master2 -kube-system kube-scheduler-k8s-master2 1/1 Running 2 18m 192.168.60.72 k8s-master2 - -$ kubectl get pod --all-namespaces -o wide | grep k8s-master3 -kube-system kube-apiserver-k8s-master3 1/1 Running 1 1m 192.168.60.73 k8s-master3 -kube-system kube-controller-manager-k8s-master3 1/1 Running 2 18m 192.168.60.73 k8s-master3 -kube-system kube-flannel-ds-tmqmx 2/2 Running 4 18m 192.168.60.73 k8s-master3 -kube-system kube-proxy-4stg3 1/1 Running 1 18m 192.168.60.73 k8s-master3 -kube-system kube-scheduler-k8s-master3 1/1 Running 2 18m 192.168.60.73 k8s-master3 -``` - -* on k8s-master1 or k8s-master2 or k8s-master3: use kubectl logs to check controller-manager and scheduler's leader election result, only one is working - -``` -$ kubectl logs -n kube-system kube-controller-manager-k8s-master1 -$ kubectl logs -n kube-system kube-controller-manager-k8s-master2 -$ kubectl logs -n kube-system kube-controller-manager-k8s-master3 - -$ kubectl logs -n kube-system kube-scheduler-k8s-master1 -$ kubectl logs -n kube-system kube-scheduler-k8s-master2 -$ kubectl logs -n kube-system kube-scheduler-k8s-master3 -``` - -* on k8s-master1 or k8s-master2 or k8s-master3: check deployment - -``` -$ kubectl get deploy --all-namespaces -NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE -kube-system heapster 1 1 1 1 41m -kube-system kube-dns 1 1 1 1 48m -kube-system kubernetes-dashboard 1 1 1 1 43m -kube-system monitoring-grafana 1 1 1 1 41m -kube-system monitoring-influxdb 1 1 1 1 41m -``` - -* on k8s-master1 or k8s-master2 or k8s-master3: scale up kubernetes-dashboard and kube-dns replicas to 3, make all master running kubernetes-dashboard and kube-dns - -``` -$ kubectl scale --replicas=3 -n kube-system deployment/kube-dns -$ kubectl get pods --all-namespaces -o wide| grep kube-dns - -$ kubectl scale --replicas=3 -n kube-system deployment/kubernetes-dashboard -$ kubectl get pods --all-namespaces -o wide| grep kubernetes-dashboard - -$ kubectl scale --replicas=3 -n kube-system deployment/heapster -$ kubectl get pods --all-namespaces -o wide| grep heapster - -$ kubectl scale --replicas=3 -n kube-system deployment/monitoring-grafana -$ kubectl get pods --all-namespaces -o wide| grep monitoring-grafana - -$ kubectl scale --replicas=3 -n kube-system deployment/monitoring-influxdb -$ kubectl get pods --all-namespaces -o wide| grep monitoring-influxdb -``` - ---- -[category](#category) - -#### keepalived installation - -* on k8s-master1, k8s-master2, k8s-master3: install keepalived service - -``` -$ yum install -y keepalived - -$ systemctl enable keepalived && systemctl restart keepalived -``` - -* on k8s-master1, k8s-master2, k8s-master3: backup keepalived config file - -``` -$ mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak -``` - -* on k8s-master1, k8s-master2, k8s-master3: create apiserver monitoring script, when apiserver failed keepalived will stop and virtual IP address will transfer to the other node - -``` -$ vi /etc/keepalived/check_apiserver.sh -#!/bin/bash -err=0 -for k in $( seq 1 10 ) -do - check_code=$(ps -ef|grep kube-apiserver | wc -l) - if [ "$check_code" = "1" ]; then - err=$(expr $err 
+ 1) - sleep 5 - continue - else - err=0 - break - fi -done -if [ "$err" != "0" ]; then - echo "systemctl stop keepalived" - /usr/bin/systemctl stop keepalived - exit 1 -else - exit 0 -fi - -chmod a+x /etc/keepalived/check_apiserver.sh -``` - -* on k8s-master1, k8s-master2, k8s-master3: check the network interface name - -``` -$ ip a | grep 192.168.60 -``` - -* on k8s-master1, k8s-master2, k8s-master3: edit keepalived settings: -* state ${STATE}: is MASTER or BACKUP, only one node can set to MASTER -* interface ${INTERFACE_NAME}: which network interfaces will virtual IP address bind on -* mcast_src_ip ${HOST_IP}: current host IP address -* priority ${PRIORITY}: for example (102 or 101 or 100) -* ${VIRTUAL_IP}: the virtual IP address, here we set to 192.168.60.80 - -``` -$ vi /etc/keepalived/keepalived.conf -! Configuration File for keepalived -global_defs { - router_id LVS_DEVEL -} -vrrp_script chk_apiserver { - script "/etc/keepalived/check_apiserver.sh" - interval 2 - weight -5 - fall 3 - rise 2 -} -vrrp_instance VI_1 { - state ${STATE} - interface ${INTERFACE_NAME} - mcast_src_ip ${HOST_IP} - virtual_router_id 51 - priority ${PRIORITY} - advert_int 2 - authentication { - auth_type PASS - auth_pass 4be37dc3b4c90194d1600c483e10ad1d - } - virtual_ipaddress { - ${VIRTUAL_IP} - } - track_script { - chk_apiserver - } -} -``` - -* on k8s-master1, k8s-master2, k8s-master3: reboot keepalived service, and check virtual IP address work or not - -``` -$ systemctl restart keepalived -$ ping 192.168.60.80 -``` - ---- -[category](#category) - -#### nginx load balancer configuration - -* on k8s-master1, k8s-master2, k8s-master3: edit nginx-default.conf settings, replace ${HOST_IP} with k8s-master1, k8s-master2, k8s-master3's IP address. - -``` -$ vi /root/kubeadm-ha/nginx-default.conf -stream { - upstream apiserver { - server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s; - server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s; - server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s; - } - - server { - listen 8443; - proxy_connect_timeout 1s; - proxy_timeout 3s; - proxy_pass apiserver; - } -} -``` - -* on k8s-master1, k8s-master2, k8s-master3: use docker to start up nginx - -``` -$ docker run -d -p 8443:8443 \ ---name nginx-lb \ ---restart always \ --v /root/kubeadm-ha/nginx-default.conf:/etc/nginx/nginx.conf \ -nginx -``` - -* on k8s-master1, k8s-master2, k8s-master3: check keepalived and nginx - -``` -$ curl -L 192.168.60.80:8443 | wc -l - % Total % Received % Xferd Average Speed Time Time Time Current - Dload Upload Total Spent Left Speed -100 14 0 14 0 0 18324 0 --:--:-- --:--:-- --:--:-- 14000 -1 -``` - -* on k8s-master1, k8s-master2, k8s-master3: check keeplived logs, if it show logs below it means that virtual IP address bind on this host - -``` -$ systemctl status keepalived -l -VRRP_Instance(VI_1) Sending gratuitous ARPs on ens160 for 192.168.60.80 -``` - ---- -[category](#category) - -#### kube-proxy configuration - -* on k8s-master1: edit kube-proxy settings to use keepalived virtual IP address - -``` -$ kubectl get -n kube-system configmap -NAME DATA AGE -extension-apiserver-authentication 6 4h -kube-flannel-cfg 2 4h -kube-proxy 1 4h -``` - -* on k8s-master1: edit configmap/kube-proxy settings, replaces the IP address to keepalived's virtual IP address - -``` -$ kubectl edit -n kube-system configmap/kube-proxy - server: https://192.168.60.80:8443 -``` - -* on k8s-master1: check configmap/kube-proxy settings - -``` -$ kubectl get -n kube-system configmap/kube-proxy -o 
yaml -``` - -* on k8s-master1: delete all kube-proxy pods, kube-proxy pods will re-create automatically - -``` -kubectl get pods --all-namespaces -o wide | grep proxy -``` - -* on k8s-master1, k8s-master2, k8s-master3: restart docker kubelet keepalived services - -``` -$ systemctl restart docker kubelet keepalived -``` - ---- -[category](#category) - -#### verfify master high avialiability with keepalived - -* on k8s-master1: check each master nodes pods status - -``` -$ kubectl get pods --all-namespaces -o wide | grep k8s-master1 - -$ kubectl get pods --all-namespaces -o wide | grep k8s-master2 - -$ kubectl get pods --all-namespaces -o wide | grep k8s-master3 -``` - ---- -[category](#category) - -### k8s-nodes join the kubernetes cluster - -#### use kubeadm to join the cluster -* on k8s-master1: make master nodes scheduling pods disabled - -``` -$ kubectl patch node k8s-master1 -p '{"spec":{"unschedulable":true}}' - -$ kubectl patch node k8s-master2 -p '{"spec":{"unschedulable":true}}' - -$ kubectl patch node k8s-master3 -p '{"spec":{"unschedulable":true}}' -``` - -* on k8s-master1: list kubeadm token - -``` -$ kubeadm token list -TOKEN TTL EXPIRES USAGES DESCRIPTION -xxxxxx.yyyyyy authentication,signing The default bootstrap token generated by 'kubeadm init' -``` - -* on k8s-node1 ~ k8s-node8: use kubeadm to join the kubernetes cluster, replace ${TOKEN} with token show ahead, replace ${VIRTUAL_IP} with keepalived's virtual IP address (192.168.60.80) - -``` -$ kubeadm join --token ${TOKEN} ${VIRTUAL_IP}:8443 -``` - ---- -[category](#category) - -#### deploy nginx application to verify installation - -* on k8s-node1 ~ k8s-node8: check kubelet status - -``` -$ systemctl status kubelet -● kubelet.service - kubelet: The Kubernetes Node Agent - Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled) - Drop-In: /etc/systemd/system/kubelet.service.d - └─10-kubeadm.conf - Active: active (running) since Tue 2017-06-27 16:23:43 CST; 1 day 18h ago - Docs: http://kubernetes.io/docs/ - Main PID: 1146 (kubelet) - Memory: 204.9M - CGroup: /system.slice/kubelet.service - ├─ 1146 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require... - ├─ 2553 journalctl -k -f - ├─ 4988 /usr/sbin/glusterfs --log-level=ERROR --log-file=/var/lib/kubelet/pl... - └─14720 /usr/sbin/glusterfs --log-level=ERROR --log-file=/var/lib/kubelet/pl... -``` - -* on k8s-master1: list nodes status - -``` -$ kubectl get nodes -o wide -NAME STATUS AGE VERSION -k8s-master1 Ready,SchedulingDisabled 5h v1.7.0 -k8s-master2 Ready,SchedulingDisabled 4h v1.7.0 -k8s-master3 Ready,SchedulingDisabled 4h v1.7.0 -k8s-node1 Ready 6m v1.7.0 -k8s-node2 Ready 4m v1.7.0 -k8s-node3 Ready 4m v1.7.0 -k8s-node4 Ready 3m v1.7.0 -k8s-node5 Ready 3m v1.7.0 -k8s-node6 Ready 3m v1.7.0 -k8s-node7 Ready 3m v1.7.0 -k8s-node8 Ready 3m v1.7.0 -``` - -* on k8s-master1: deploy nginx service on kubernetes, it show that nginx service deploy on k8s-node5 - -``` -$ kubectl run nginx --image=nginx --port=80 -deployment "nginx" created - -$ kubectl get pod -o wide -l=run=nginx -NAME READY STATUS RESTARTS AGE IP NODE -nginx-2662403697-pbmwt 1/1 Running 0 5m 10.244.7.6 k8s-node5 -``` - -* on k8s-master1: expose nginx services port - -``` -$ kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort -service "nginx" exposed - -$ kubectl get svc -l=run=nginx -NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE -nginx 10.105.151.69 80:31639/TCP 43s - -$ curl k8s-master2:31639 - - - -Welcome to nginx! - - - -

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

- - - -``` - -* congratulation! kubernetes high availiability cluster deploy successfully 😀 ---- -[category](#category) - diff --git a/v1.7/README_CN.md b/v1.7/README_CN.md deleted file mode 100644 index 9bf212b..0000000 --- a/v1.7/README_CN.md +++ /dev/null @@ -1,1152 +0,0 @@ -# kubeadm-highavailiability - 基于kubeadm的kubernetes高可用集群部署,支持v1.11.x v1.9.x v1.7.x v1.6.x版本 - -![k8s logo](../images/Kubernetes.png) - -- [中文文档(for v1.11.x版本)](../README_CN.md) -- [English document(for v1.11.x version)](../README.md) -- [中文文档(for v1.9.x版本)](../v1.9/README_CN.md) -- [English document(for v1.9.x version)](../v1.9/README.md) -- [中文文档(for v1.7.x版本)](../v1.7/README_CN.md) -- [English document(for v1.7.x version)](../v1.7/README.md) -- [中文文档(for v1.6.x版本)](../v1.6/README_CN.md) -- [English document(for v1.6.x version)](../v1.6/README.md) - ---- - -- [GitHub项目地址](https://github.com/cookeem/kubeadm-ha/) -- [OSChina项目地址](https://git.oschina.net/cookeem/kubeadm-ha/) - ---- - -- 该指引适用于v1.7.x版本的kubernetes集群 - -### 目录 - -1. [部署架构](#部署架构) - 1. [概要部署架构](#概要部署架构) - 1. [详细部署架构](#详细部署架构) - 1. [主机节点清单](#主机节点清单) -1. [安装前准备](#安装前准备) - 1. [版本信息](#版本信息) - 1. [所需docker镜像](#所需docker镜像) - 1. [系统设置](#系统设置) -1. [kubernetes安装](#kubernetes安装) - 1. [kubernetes相关服务安装](#kubernetes相关服务安装) - 1. [docker镜像导入](#docker镜像导入) -1. [第一台master初始化](#第一台master初始化) - 1. [独立etcd集群部署](#独立etcd集群部署) - 1. [kubeadm初始化](#kubeadm初始化) - 1. [flannel网络组件安装](#flannel网络组件安装) - 1. [dashboard组件安装](#dashboard组件安装) - 1. [heapster组件安装](#heapster组件安装) -1. [master集群高可用设置](#master集群高可用设置) - 1. [复制配置](#复制配置) - 1. [修改配置](#修改配置) - 1. [验证高可用安装](#验证高可用安装) - 1. [keepalived安装配置](#keepalived安装配置) - 1. [nginx负载均衡配置](#nginx负载均衡配置) - 1. [kube-proxy配置](#kube-proxy配置) - 1. [验证master集群高可用](#验证master集群高可用) -1. [node节点加入高可用集群设置](#node节点加入高可用集群设置) - 1. [kubeadm加入高可用集群](#kubeadm加入高可用集群) - 1. 
[部署应用验证集群](#部署应用验证集群) - - -### 部署架构 - -#### 概要部署架构 - -![ha logo](../images/ha.png) - -* kubernetes高可用的核心架构是master的高可用,kubectl、客户端以及nodes访问load balancer实现高可用。 - ---- -[返回目录](#目录) - -#### 详细部署架构 - -![k8s ha](../images/k8s-ha.png) - -* kubernetes组件说明 - -> kube-apiserver:集群核心,集群API接口、集群各个组件通信的中枢;集群安全控制; - -> etcd:集群的数据中心,用于存放集群的配置以及状态信息,非常重要,如果数据丢失那么集群将无法恢复;因此高可用集群部署首先就是etcd是高可用集群; - -> kube-scheduler:集群Pod的调度中心;默认kubeadm安装情况下--leader-elect参数已经设置为true,保证master集群中只有一个kube-scheduler处于活跃状态; - -> kube-controller-manager:集群状态管理器,当集群状态与期望不同时,kcm会努力让集群恢复期望状态,比如:当一个pod死掉,kcm会努力新建一个pod来恢复对应replicas set期望的状态;默认kubeadm安装情况下--leader-elect参数已经设置为true,保证master集群中只有一个kube-controller-manager处于活跃状态; - -> kubelet: kubernetes node agent,负责与node上的docker engine打交道; - -> kube-proxy: 每个node上一个,负责service vip到endpoint pod的流量转发,当前主要通过设置iptables规则实现。 - -* 负载均衡 - -> keepalived集群设置一个虚拟ip地址,虚拟ip地址指向k8s-master1、k8s-master2、k8s-master3。 - -> nginx用于k8s-master1、k8s-master2、k8s-master3的apiserver的负载均衡。外部kubectl以及nodes访问apiserver的时候就可以用过keepalived的虚拟ip(192.168.60.80)以及nginx端口(8443)访问master集群的apiserver。 - ---- -[返回目录](#目录) - -#### 主机节点清单 - - 主机名 | IP地址 | 说明 | 组件 - :--- | :--- | :--- | :--- - k8s-master1 | 192.168.60.71 | master节点1 | keepalived、nginx、etcd、kubelet、kube-apiserver、kube-scheduler、kube-proxy、kube-dashboard、heapster - k8s-master2 | 192.168.60.72 | master节点2 | keepalived、nginx、etcd、kubelet、kube-apiserver、kube-scheduler、kube-proxy、kube-dashboard、heapster - k8s-master3 | 192.168.60.73 | master节点3 | keepalived、nginx、etcd、kubelet、kube-apiserver、kube-scheduler、kube-proxy、kube-dashboard、heapster - 无 | 192.168.60.80 | keepalived虚拟IP | 无 - k8s-node1 ~ 8 | 192.168.60.81 ~ 88 | 8个node节点 | kubelet、kube-proxy - ---- -[返回目录](#目录) - -### 安装前准备 - -#### 版本信息 - -* Linux版本:CentOS 7.3.1611 - -``` -cat /etc/redhat-release -CentOS Linux release 7.3.1611 (Core) -``` - -* docker版本:1.12.6 - -``` -$ docker version -Client: - Version: 1.12.6 - API version: 1.24 - Go version: go1.6.4 - Git commit: 78d1802 - Built: Tue Jan 10 20:20:01 2017 - OS/Arch: linux/amd64 - -Server: - Version: 1.12.6 - API version: 1.24 - Go version: go1.6.4 - Git commit: 78d1802 - Built: Tue Jan 10 20:20:01 2017 - OS/Arch: linux/amd64 -``` - -* kubeadm版本:v1.7.0 - -``` -$ kubeadm version -kubeadm version: &version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T22:55:19Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} -``` - -* kubelet版本:v1.7.0 - -``` -$ kubelet --version -Kubernetes v1.7.0 -``` - ---- - -[返回目录](#目录) - -#### 所需docker镜像 - -* 国内可以使用daocloud加速器下载相关镜像,然后通过docker save、docker load把本地下载的镜像放到kubernetes集群的所在机器上,daocloud加速器链接如下: - -[https://www.daocloud.io/mirror#accelerator-doc](https://www.daocloud.io/mirror#accelerator-doc) - -* 在本机MacOSX上pull相关docker镜像 - -``` -$ docker pull gcr.io/google_containers/kube-proxy-amd64:v1.7.0 -$ docker pull gcr.io/google_containers/kube-apiserver-amd64:v1.7.0 -$ docker pull gcr.io/google_containers/kube-controller-manager-amd64:v1.7.0 -$ docker pull gcr.io/google_containers/kube-scheduler-amd64:v1.7.0 -$ docker pull gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4 -$ docker pull gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4 -$ docker pull gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4 -$ docker pull nginx:latest -$ docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1 -$ docker pull quay.io/coreos/flannel:v0.7.1-amd64 -$ docker pull gcr.io/google_containers/heapster-amd64:v1.3.0 -$ 
docker pull gcr.io/google_containers/etcd-amd64:3.0.17 -$ docker pull gcr.io/google_containers/heapster-grafana-amd64:v4.0.2 -$ docker pull gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1 -$ docker pull gcr.io/google_containers/pause-amd64:3.0 -``` - -* 在本机MacOSX上获取代码,并进入代码目录 - -``` -$ git clone https://github.com/cookeem/kubeadm-ha -$ cd kubeadm-ha -``` - -* 在本机MacOSX上把相关docker镜像保存成文件 - -``` -$ mkdir -p docker-images -$ docker save -o docker-images/kube-proxy-amd64 gcr.io/google_containers/kube-proxy-amd64:v1.7.0 -$ docker save -o docker-images/kube-apiserver-amd64 gcr.io/google_containers/kube-apiserver-amd64:v1.7.0 -$ docker save -o docker-images/kube-controller-manager-amd64 gcr.io/google_containers/kube-controller-manager-amd64:v1.7.0 -$ docker save -o docker-images/kube-scheduler-amd64 gcr.io/google_containers/kube-scheduler-amd64:v1.7.0 -$ docker save -o docker-images/k8s-dns-sidecar-amd64 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4 -$ docker save -o docker-images/k8s-dns-kube-dns-amd64 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4 -$ docker save -o docker-images/k8s-dns-dnsmasq-nanny-amd64 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4 -$ docker save -o docker-images/heapster-grafana-amd64 gcr.io/google_containers/heapster-grafana-amd64:v4.2.0 -$ docker save -o docker-images/nginx nginx:latest -$ docker save -o docker-images/kubernetes-dashboard-amd64 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1 -$ docker save -o docker-images/flannel quay.io/coreos/flannel:v0.7.1-amd64 -$ docker save -o docker-images/heapster-amd64 gcr.io/google_containers/heapster-amd64:v1.3.0 -$ docker save -o docker-images/etcd-amd64 gcr.io/google_containers/etcd-amd64:3.0.17 -$ docker save -o docker-images/heapster-grafana-amd64 gcr.io/google_containers/heapster-grafana-amd64:v4.0.2 -$ docker save -o docker-images/heapster-influxdb-amd64 gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1 -$ docker save -o docker-images/pause-amd64 gcr.io/google_containers/pause-amd64:3.0 -``` - -* 在本机MacOSX上把代码以及docker镜像复制到所有节点上 - -``` -$ scp -r * root@k8s-master1:/root/kubeadm-ha -$ scp -r * root@k8s-master2:/root/kubeadm-ha -$ scp -r * root@k8s-master3:/root/kubeadm-ha -$ scp -r * root@k8s-node1:/root/kubeadm-ha -$ scp -r * root@k8s-node2:/root/kubeadm-ha -$ scp -r * root@k8s-node3:/root/kubeadm-ha -$ scp -r * root@k8s-node4:/root/kubeadm-ha -$ scp -r * root@k8s-node5:/root/kubeadm-ha -$ scp -r * root@k8s-node6:/root/kubeadm-ha -$ scp -r * root@k8s-node7:/root/kubeadm-ha -$ scp -r * root@k8s-node8:/root/kubeadm-ha -``` - ---- -[返回目录](#目录) - -#### 系统设置 - -* 以下在kubernetes所有节点上都是使用root用户进行操作 - -* 在kubernetes所有节点上增加kubernetes仓库 - -``` -$ cat < /etc/yum.repos.d/kubernetes.repo -[kubernetes] -name=Kubernetes -baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 -enabled=1 -gpgcheck=1 -repo_gpgcheck=1 -gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg - https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg -EOF -``` - -* 在kubernetes所有节点上进行系统更新 - -``` -$ yum update -y -``` - -* 在kubernetes所有节点上关闭防火墙 - -``` -$ systemctl disable firewalld && systemctl stop firewalld && systemctl status firewalld -``` - -* 在kubernetes所有节点上设置SELINUX为permissive模式 - -``` -$ vi /etc/selinux/config -SELINUX=permissive -``` - -* 在kubernetes所有节点上设置iptables参数,否则kubeadm init会提示错误 - -``` -$ vi /etc/sysctl.d/k8s.conf -net.bridge.bridge-nf-call-iptables = 1 -net.bridge.bridge-nf-call-ip6tables = 1 -``` - -* 在kubernetes所有节点上重启主机 - -``` -$ reboot -``` - ---- 
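* Before moving on to the package installation below, it helps to confirm that the bridge netfilter settings survived the reboot, since kubeadm init complains when they are missing (a quick check, assuming the /etc/sysctl.d/k8s.conf file created above):

```
$ modprobe br_netfilter
$ sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```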
-[返回目录](#目录) - -### kubernetes安装 - -#### kubernetes相关服务安装 - -* 在kubernetes所有节点上验证SELINUX模式,必须保证SELINUX为permissive模式,否则kubernetes启动会出现各种异常 - -``` -$ getenforce -Permissive -``` - -* 在kubernetes所有节点上安装并启动kubernetes - -``` -$ yum search docker --showduplicates -$ yum install docker-1.12.6-16.el7.centos.x86_64 - -$ yum search kubelet --showduplicates -$ yum install kubelet-1.7.0-0.x86_64 - -$ yum search kubeadm --showduplicates -$ yum install kubeadm-1.7.0-0.x86_64 - -$ yum search kubernetes-cni --showduplicates -$ yum install kubernetes-cni-0.5.1-0.x86_64 - -$ systemctl enable docker && systemctl start docker -$ systemctl enable kubelet && systemctl start kubelet -``` - ---- -[返回目录](#目录) - -#### docker镜像导入 - -* 在kubernetes所有节点上导入docker镜像 - -``` -$ docker load -i /root/kubeadm-ha/docker-images/etcd-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/flannel -$ docker load -i /root/kubeadm-ha/docker-images/heapster-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/heapster-grafana-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/heapster-influxdb-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/k8s-dns-dnsmasq-nanny-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/k8s-dns-kube-dns-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/k8s-dns-sidecar-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/kube-apiserver-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/kube-controller-manager-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/kube-proxy-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/kubernetes-dashboard-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/kube-scheduler-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/pause-amd64 -$ docker load -i /root/kubeadm-ha/docker-images/nginx - -$ docker images -REPOSITORY TAG IMAGE ID CREATED SIZE -gcr.io/google_containers/kube-proxy-amd64 v1.7.0 d2d44013d0f8 4 days ago 114.7 MB -gcr.io/google_containers/kube-apiserver-amd64 v1.7.0 f0d4b746fb2b 4 days ago 185.2 MB -gcr.io/google_containers/kube-controller-manager-amd64 v1.7.0 36bf73ed0632 4 days ago 137 MB -gcr.io/google_containers/kube-scheduler-amd64 v1.7.0 5c9a7f60a95c 4 days ago 77.16 MB -gcr.io/google_containers/k8s-dns-sidecar-amd64 1.14.4 38bac66034a6 7 days ago 41.81 MB -gcr.io/google_containers/k8s-dns-kube-dns-amd64 1.14.4 a8e00546bcf3 7 days ago 49.38 MB -gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 1.14.4 f7f45b9cb733 7 days ago 41.41 MB -nginx latest 958a7ae9e569 4 weeks ago 109.4 MB -gcr.io/google_containers/kubernetes-dashboard-amd64 v1.6.1 71dfe833ce74 6 weeks ago 134.4 MB -quay.io/coreos/flannel v0.7.1-amd64 cd4ae0be5e1b 10 weeks ago 77.76 MB -gcr.io/google_containers/heapster-amd64 v1.3.0 f9d33bedfed3 3 months ago 68.11 MB -gcr.io/google_containers/etcd-amd64 3.0.17 243830dae7dd 4 months ago 168.9 MB -gcr.io/google_containers/heapster-grafana-amd64 v4.0.2 a1956d2a1a16 5 months ago 131.5 MB -gcr.io/google_containers/heapster-influxdb-amd64 v1.1.1 d3fccbedd180 5 months ago 11.59 MB -gcr.io/google_containers/pause-amd64 3.0 99e59f495ffa 14 months ago 746.9 kB -``` - ---- -[返回目录](#目录) - -### 第一台master初始化 - -#### 独立etcd集群部署 - -* 在k8s-master1节点上以docker方式启动etcd集群 - -``` -$ docker stop etcd && docker rm etcd -$ rm -rf /var/lib/etcd-cluster -$ mkdir -p /var/lib/etcd-cluster -$ docker run -d \ ---restart always \ --v /etc/ssl/certs:/etc/ssl/certs \ --v /var/lib/etcd-cluster:/var/lib/etcd \ --p 4001:4001 \ --p 2380:2380 \ --p 2379:2379 \ ---name etcd \ -gcr.io/google_containers/etcd-amd64:3.0.17 \ -etcd --name=etcd0 \ 
---advertise-client-urls=http://192.168.60.71:2379,http://192.168.60.71:4001 \ ---listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \ ---initial-advertise-peer-urls=http://192.168.60.71:2380 \ ---listen-peer-urls=http://0.0.0.0:2380 \ ---initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \ ---initial-cluster=etcd0=http://192.168.60.71:2380,etcd1=http://192.168.60.72:2380,etcd2=http://192.168.60.73:2380 \ ---initial-cluster-state=new \ ---auto-tls \ ---peer-auto-tls \ ---data-dir=/var/lib/etcd -``` - -* 在k8s-master2节点上以docker方式启动etcd集群 - -``` -$ docker stop etcd && docker rm etcd -$ rm -rf /var/lib/etcd-cluster -$ mkdir -p /var/lib/etcd-cluster -$ docker run -d \ ---restart always \ --v /etc/ssl/certs:/etc/ssl/certs \ --v /var/lib/etcd-cluster:/var/lib/etcd \ --p 4001:4001 \ --p 2380:2380 \ --p 2379:2379 \ ---name etcd \ -gcr.io/google_containers/etcd-amd64:3.0.17 \ -etcd --name=etcd1 \ ---advertise-client-urls=http://192.168.60.72:2379,http://192.168.60.72:4001 \ ---listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \ ---initial-advertise-peer-urls=http://192.168.60.72:2380 \ ---listen-peer-urls=http://0.0.0.0:2380 \ ---initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \ ---initial-cluster=etcd0=http://192.168.60.71:2380,etcd1=http://192.168.60.72:2380,etcd2=http://192.168.60.73:2380 \ ---initial-cluster-state=new \ ---auto-tls \ ---peer-auto-tls \ ---data-dir=/var/lib/etcd -``` - -* 在k8s-master3节点上以docker方式启动etcd集群 - -``` -$ docker stop etcd && docker rm etcd -$ rm -rf /var/lib/etcd-cluster -$ mkdir -p /var/lib/etcd-cluster -$ docker run -d \ ---restart always \ --v /etc/ssl/certs:/etc/ssl/certs \ --v /var/lib/etcd-cluster:/var/lib/etcd \ --p 4001:4001 \ --p 2380:2380 \ --p 2379:2379 \ ---name etcd \ -gcr.io/google_containers/etcd-amd64:3.0.17 \ -etcd --name=etcd2 \ ---advertise-client-urls=http://192.168.60.73:2379,http://192.168.60.73:4001 \ ---listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \ ---initial-advertise-peer-urls=http://192.168.60.73:2380 \ ---listen-peer-urls=http://0.0.0.0:2380 \ ---initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \ ---initial-cluster=etcd0=http://192.168.60.71:2380,etcd1=http://192.168.60.72:2380,etcd2=http://192.168.60.73:2380 \ ---initial-cluster-state=new \ ---auto-tls \ ---peer-auto-tls \ ---data-dir=/var/lib/etcd -``` - -* 在k8s-master1、k8s-master2、k8s-master3上检查etcd启动状态 - -``` -$ docker exec -ti etcd ash - -$ etcdctl member list -1a32c2d3f1abcad0: name=etcd2 peerURLs=http://192.168.60.73:2380 clientURLs=http://192.168.60.73:2379,http://192.168.60.73:4001 isLeader=false -1da4f4e8b839cb79: name=etcd1 peerURLs=http://192.168.60.72:2380 clientURLs=http://192.168.60.72:2379,http://192.168.60.72:4001 isLeader=false -4238bcb92d7f2617: name=etcd0 peerURLs=http://192.168.60.71:2380 clientURLs=http://192.168.60.71:2379,http://192.168.60.71:4001 isLeader=true - -$ etcdctl cluster-health -member 1a32c2d3f1abcad0 is healthy: got healthy result from http://192.168.60.73:2379 -member 1da4f4e8b839cb79 is healthy: got healthy result from http://192.168.60.72:2379 -member 4238bcb92d7f2617 is healthy: got healthy result from http://192.168.60.71:2379 -cluster is healthy - -$ exit -``` - ---- -[返回目录](#目录) - -#### kubeadm初始化 - -* 在k8s-master1上修改kubeadm-init-v1.7.x.yaml文件,设置etcd.endpoints的${HOST_IP}为k8s-master1、k8s-master2、k8s-master3的IP地址。设置apiServerCertSANs的${HOST_IP}为k8s-master1、k8s-master2、k8s-master3的IP地址,${HOST_NAME}为k8s-master1、k8s-master2、k8s-master3,${VIRTUAL_IP}为keepalived的虚拟IP地址 - -``` -$ vi 
/root/kubeadm-ha/kubeadm-init-v1.7.x.yaml -apiVersion: kubeadm.k8s.io/v1alpha1 -kind: MasterConfiguration -kubernetesVersion: v1.7.0 -networking: - podSubnet: 10.244.0.0/16 -apiServerCertSANs: -- k8s-master1 -- k8s-master2 -- k8s-master3 -- 192.168.60.71 -- 192.168.60.72 -- 192.168.60.73 -- 192.168.60.80 -etcd: - endpoints: - - http://192.168.60.71:2379 - - http://192.168.60.72:2379 - - http://192.168.60.73:2379 -``` - -* 如果使用kubeadm初始化集群,启动过程可能会卡在以下位置,那么可能是因为cgroup-driver参数与docker的不一致引起 -* [apiclient] Created API client, waiting for the control plane to become ready -* journalctl -t kubelet -S '2017-06-08'查看日志,发现如下错误 -* error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" -* 需要修改KUBELET_CGROUP_ARGS=--cgroup-driver=systemd为KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs - -``` -$ vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -#Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd" -Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs" - -$ systemctl daemon-reload && systemctl restart kubelet -``` - -* 在k8s-master1上使用kubeadm初始化kubernetes集群,连接外部etcd集群 - -``` -$ kubeadm init --config=/root/kubeadm-ha/kubeadm-init-v1.7.x.yaml -``` - -* 在k8s-master1上修改kube-apiserver.yaml的admission-control,v1.7.0使用了NodeRestriction等安全检查控制,务必设置成v1.6.x推荐的admission-control配置 - -``` -$ vi /etc/kubernetes/manifests/kube-apiserver.yaml -# - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota - - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds -``` - -* 在k8s-master1上重启docker kubelet服务 - -``` -$ systemctl restart docker kubelet -``` - -* 在k8s-master1上设置kubectl的环境变量KUBECONFIG,连接kubelet - -``` -$ vi ~/.bashrc -export KUBECONFIG=/etc/kubernetes/admin.conf - -$ source ~/.bashrc -``` - ---- -[返回目录](#目录) - -#### flannel网络组件安装 - -* 在k8s-master1上安装flannel pod网络组件,必须安装网络组件,否则kube-dns pod会一直处于ContainerCreating - -``` -$ kubectl create -f /root/kubeadm-ha/kube-flannel -clusterrole "flannel" created -clusterrolebinding "flannel" created -serviceaccount "flannel" created -configmap "kube-flannel-cfg" created -daemonset "kube-flannel-ds" created -``` - -* 在k8s-master1上验证kube-dns成功启动,大概等待3分钟,验证所有pods的状态为Running - -``` -$ kubectl get pods --all-namespaces -o wide -NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE -kube-system kube-apiserver-k8s-master1 1/1 Running 0 3m 192.168.60.71 k8s-master1 -kube-system kube-controller-manager-k8s-master1 1/1 Running 0 3m 192.168.60.71 k8s-master1 -kube-system kube-dns-3913472980-k9mt6 3/3 Running 0 4m 10.244.0.104 k8s-master1 -kube-system kube-flannel-ds-3hhjd 2/2 Running 0 1m 192.168.60.71 k8s-master1 -kube-system kube-proxy-rzq3t 1/1 Running 0 4m 192.168.60.71 k8s-master1 -kube-system kube-scheduler-k8s-master1 1/1 Running 0 3m 192.168.60.71 k8s-master1 -``` - ---- -[返回目录](#目录) - -#### dashboard组件安装 - -* 在k8s-master1上安装dashboard组件 - -``` -$ kubectl create -f /root/kubeadm-ha/kube-dashboard/ -serviceaccount "kubernetes-dashboard" created -clusterrolebinding "kubernetes-dashboard" created -deployment "kubernetes-dashboard" created -service "kubernetes-dashboard" created -``` - -* 在k8s-master1上启动proxy,映射地址到0.0.0.0 - -``` -$ kubectl proxy --address='0.0.0.0' & -``` - -* 在本机MacOSX上访问dashboard地址,验证dashboard成功启动 - -``` -http://k8s-master1:30000 -``` - -![dashboard](images/dashboard.png) - ---- -[返回目录](#目录) - -#### 
heapster组件安装 - -* 在k8s-master1上允许在master上部署pod,否则heapster会无法部署 - -``` -$ kubectl taint nodes --all node-role.kubernetes.io/master- -node "k8s-master1" tainted -``` - -* 在k8s-master1上安装heapster组件,监控性能 - -``` -$ kubectl create -f /root/kubeadm-ha/kube-heapster -``` - -* 在k8s-master1上重启docker以及kubelet服务,让heapster在dashboard上生效显示 - -``` -$ systemctl restart docker kubelet -``` - -* 在k8s-master上检查pods状态 - -``` -$ kubectl get all --all-namespaces -o wide -NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE -kube-system heapster-783524908-kn6jd 1/1 Running 1 9m 10.244.0.111 k8s-master1 -kube-system kube-apiserver-k8s-master1 1/1 Running 1 15m 192.168.60.71 k8s-master1 -kube-system kube-controller-manager-k8s-master1 1/1 Running 1 15m 192.168.60.71 k8s-master1 -kube-system kube-dns-3913472980-k9mt6 3/3 Running 3 16m 10.244.0.110 k8s-master1 -kube-system kube-flannel-ds-3hhjd 2/2 Running 3 13m 192.168.60.71 k8s-master1 -kube-system kube-proxy-rzq3t 1/1 Running 1 16m 192.168.60.71 k8s-master1 -kube-system kube-scheduler-k8s-master1 1/1 Running 1 15m 192.168.60.71 k8s-master1 -kube-system kubernetes-dashboard-2039414953-d46vw 1/1 Running 1 11m 10.244.0.109 k8s-master1 -kube-system monitoring-grafana-3975459543-8l94z 1/1 Running 1 9m 10.244.0.112 k8s-master1 -kube-system monitoring-influxdb-3480804314-72ltf 1/1 Running 1 9m 10.244.0.113 k8s-master1 -``` - -* 在本机MacOSX上访问dashboard地址,验证heapster成功启动,查看Pods的CPU以及Memory信息是否正常呈现 - -``` -http://k8s-master1:30000 -``` - -![heapster](images/heapster.png) - -* 至此,第一台master成功安装,并已经完成flannel、dashboard、heapster的部署 - ---- -[返回目录](#目录) - -### master集群高可用设置 - -#### 复制配置 - -* 在k8s-master1上把/etc/kubernetes/复制到k8s-master2、k8s-master3 - -``` -scp -r /etc/kubernetes/ k8s-master2:/etc/ -scp -r /etc/kubernetes/ k8s-master3:/etc/ -``` - -* 在k8s-master2、k8s-master3上重启kubelet服务,并检查kubelet服务状态为active (running) - -``` -$ systemctl daemon-reload && systemctl restart kubelet - -$ systemctl status kubelet -● kubelet.service - kubelet: The Kubernetes Node Agent - Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled) - Drop-In: /etc/systemd/system/kubelet.service.d - └─10-kubeadm.conf - Active: active (running) since Tue 2017-06-27 16:24:22 CST; 1 day 17h ago - Docs: http://kubernetes.io/docs/ - Main PID: 2780 (kubelet) - Memory: 92.9M - CGroup: /system.slice/kubelet.service - ├─2780 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-... 
- └─2811 journalctl -k -f -``` - -* 在k8s-master2、k8s-master3上设置kubectl的环境变量KUBECONFIG,连接kubelet - -``` -$ vi ~/.bashrc -export KUBECONFIG=/etc/kubernetes/admin.conf - -$ source ~/.bashrc -``` - -* 在k8s-master2、k8s-master3检测节点状态,发现节点已经加进来 - -``` -$ kubectl get nodes -o wide -NAME STATUS AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION -k8s-master1 Ready 26m v1.7.0 CentOS Linux 7 (Core) 3.10.0-514.6.1.el7.x86_64 -k8s-master2 Ready 2m v1.7.0 CentOS Linux 7 (Core) 3.10.0-514.21.1.el7.x86_64 -k8s-master3 Ready 2m v1.7.0 CentOS Linux 7 (Core) 3.10.0-514.21.1.el7.x86_64 -``` - ---- -[返回目录](#目录) - -#### 修改配置 - -* 在k8s-master2、k8s-master3上修改kube-apiserver.yaml的配置,${HOST_IP}改为本机IP - -``` -$ vi /etc/kubernetes/manifests/kube-apiserver.yaml - - --advertise-address=${HOST_IP} -``` - -* 在k8s-master2和k8s-master3上的修改kubelet.conf设置,${HOST_IP}改为本机IP - -``` -$ vi /etc/kubernetes/kubelet.conf -server: https://${HOST_IP}:6443 -``` - -* 在k8s-master2和k8s-master3上修改admin.conf,${HOST_IP}修改为本机IP地址 - -``` -$ vi /etc/kubernetes/admin.conf - server: https://${HOST_IP}:6443 -``` - -* 在k8s-master2和k8s-master3上修改controller-manager.conf,${HOST_IP}修改为本机IP地址 - -``` -$ vi /etc/kubernetes/controller-manager.conf - server: https://${HOST_IP}:6443 -``` - -* 在k8s-master2和k8s-master3上修改scheduler.conf,${HOST_IP}修改为本机IP地址 - -``` -$ vi /etc/kubernetes/scheduler.conf - server: https://${HOST_IP}:6443 -``` - -* 在k8s-master1、k8s-master2、k8s-master3上重启所有服务 - -``` -$ systemctl daemon-reload && systemctl restart docker kubelet -``` - ---- -[返回目录](#目录) - -#### 验证高可用安装 - -* 在k8s-master1、k8s-master2、k8s-master3任意节点上检测服务启动情况,发现apiserver、controller-manager、kube-scheduler、proxy、flannel已经在k8s-master1、k8s-master2、k8s-master3成功启动 - -``` -$ kubectl get pod --all-namespaces -o wide | grep k8s-master2 -kube-system kube-apiserver-k8s-master2 1/1 Running 1 55s 192.168.60.72 k8s-master2 -kube-system kube-controller-manager-k8s-master2 1/1 Running 2 18m 192.168.60.72 k8s-master2 -kube-system kube-flannel-ds-t8gkh 2/2 Running 4 18m 192.168.60.72 k8s-master2 -kube-system kube-proxy-bpgqw 1/1 Running 1 18m 192.168.60.72 k8s-master2 -kube-system kube-scheduler-k8s-master2 1/1 Running 2 18m 192.168.60.72 k8s-master2 - -$ kubectl get pod --all-namespaces -o wide | grep k8s-master3 -kube-system kube-apiserver-k8s-master3 1/1 Running 1 1m 192.168.60.73 k8s-master3 -kube-system kube-controller-manager-k8s-master3 1/1 Running 2 18m 192.168.60.73 k8s-master3 -kube-system kube-flannel-ds-tmqmx 2/2 Running 4 18m 192.168.60.73 k8s-master3 -kube-system kube-proxy-4stg3 1/1 Running 1 18m 192.168.60.73 k8s-master3 -kube-system kube-scheduler-k8s-master3 1/1 Running 2 18m 192.168.60.73 k8s-master3 -``` - -* 在k8s-master1、k8s-master2、k8s-master3任意节点上通过kubectl logs检查各个controller-manager和scheduler的leader election结果,可以发现只有一个节点有效表示选举正常 - -``` -$ kubectl logs -n kube-system kube-controller-manager-k8s-master1 -$ kubectl logs -n kube-system kube-controller-manager-k8s-master2 -$ kubectl logs -n kube-system kube-controller-manager-k8s-master3 - -$ kubectl logs -n kube-system kube-scheduler-k8s-master1 -$ kubectl logs -n kube-system kube-scheduler-k8s-master2 -$ kubectl logs -n kube-system kube-scheduler-k8s-master3 -``` - -* 在k8s-master1、k8s-master2、k8s-master3任意节点上查看deployment的情况 - -``` -$ kubectl get deploy --all-namespaces -NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE -kube-system heapster 1 1 1 1 41m -kube-system kube-dns 1 1 1 1 48m -kube-system kubernetes-dashboard 1 1 1 1 43m -kube-system monitoring-grafana 1 1 1 1 41m -kube-system monitoring-influxdb 1 1 1 1 41m 
-``` - -* 在k8s-master1、k8s-master2、k8s-master3任意节点上把kubernetes-dashboard、kube-dns、 scale up成replicas=3,保证各个master节点上都有运行 - -``` -$ kubectl scale --replicas=3 -n kube-system deployment/kube-dns -$ kubectl get pods --all-namespaces -o wide| grep kube-dns - -$ kubectl scale --replicas=3 -n kube-system deployment/kubernetes-dashboard -$ kubectl get pods --all-namespaces -o wide| grep kubernetes-dashboard - -$ kubectl scale --replicas=3 -n kube-system deployment/heapster -$ kubectl get pods --all-namespaces -o wide| grep heapster - -$ kubectl scale --replicas=3 -n kube-system deployment/monitoring-grafana -$ kubectl get pods --all-namespaces -o wide| grep monitoring-grafana - -$ kubectl scale --replicas=3 -n kube-system deployment/monitoring-influxdb -$ kubectl get pods --all-namespaces -o wide| grep monitoring-influxdb -``` - ---- -[返回目录](#目录) - -#### keepalived安装配置 - -* 在k8s-master、k8s-master2、k8s-master3上安装keepalived - -``` -$ yum install -y keepalived - -$ systemctl enable keepalived && systemctl restart keepalived -``` - -* 在k8s-master1、k8s-master2、k8s-master3上备份keepalived配置文件 - -``` -$ mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak -``` - -* 在k8s-master1、k8s-master2、k8s-master3上设置apiserver监控脚本,当apiserver检测失败的时候关闭keepalived服务,转移虚拟IP地址 - -``` -$ vi /etc/keepalived/check_apiserver.sh -#!/bin/bash -err=0 -for k in $( seq 1 10 ) -do - check_code=$(ps -ef|grep kube-apiserver | wc -l) - if [ "$check_code" = "1" ]; then - err=$(expr $err + 1) - sleep 5 - continue - else - err=0 - break - fi -done -if [ "$err" != "0" ]; then - echo "systemctl stop keepalived" - /usr/bin/systemctl stop keepalived - exit 1 -else - exit 0 -fi - -chmod a+x /etc/keepalived/check_apiserver.sh -``` - -* 在k8s-master1、k8s-master2、k8s-master3上查看接口名字 - -``` -$ ip a | grep 192.168.60 -``` - -* 在k8s-master1、k8s-master2、k8s-master3上设置keepalived,参数说明如下: -* state ${STATE}:为MASTER或者BACKUP,只能有一个MASTER -* interface ${INTERFACE_NAME}:为本机的需要绑定的接口名字(通过上边的```ip a```命令查看) -* mcast_src_ip ${HOST_IP}:为本机的IP地址 -* priority ${PRIORITY}:为优先级,例如102、101、100,优先级越高越容易选择为MASTER,优先级不能一样 -* ${VIRTUAL_IP}:为虚拟的IP地址,这里设置为192.168.60.80 - -``` -$ vi /etc/keepalived/keepalived.conf -! 
Configuration File for keepalived -global_defs { - router_id LVS_DEVEL -} -vrrp_script chk_apiserver { - script "/etc/keepalived/check_apiserver.sh" - interval 2 - weight -5 - fall 3 - rise 2 -} -vrrp_instance VI_1 { - state ${STATE} - interface ${INTERFACE_NAME} - mcast_src_ip ${HOST_IP} - virtual_router_id 51 - priority ${PRIORITY} - advert_int 2 - authentication { - auth_type PASS - auth_pass 4be37dc3b4c90194d1600c483e10ad1d - } - virtual_ipaddress { - ${VIRTUAL_IP} - } - track_script { - chk_apiserver - } -} -``` - -* 在k8s-master1、k8s-master2、k8s-master3上重启keepalived服务,检测虚拟IP地址是否生效 - -``` -$ systemctl restart keepalived -$ ping 192.168.60.80 -``` - ---- -[返回目录](#目录) - -#### nginx负载均衡配置 - -* 在k8s-master1、k8s-master2、k8s-master3上修改nginx-default.conf设置,${HOST_IP}对应k8s-master1、k8s-master2、k8s-master3的地址。通过nginx把访问apiserver的6443端口负载均衡到8433端口上 - -``` -$ vi /root/kubeadm-ha/nginx-default.conf -stream { - upstream apiserver { - server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s; - server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s; - server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s; - } - - server { - listen 8443; - proxy_connect_timeout 1s; - proxy_timeout 3s; - proxy_pass apiserver; - } -} -``` - -* 在k8s-master1、k8s-master2、k8s-master3上启动nginx容器 - -``` -$ docker run -d -p 8443:8443 \ ---name nginx-lb \ ---restart always \ --v /root/kubeadm-ha/nginx-default.conf:/etc/nginx/nginx.conf \ -nginx -``` - -* 在k8s-master1、k8s-master2、k8s-master3上检测keepalived服务的虚拟IP地址指向 - -``` -$ curl -L 192.168.60.80:8443 | wc -l - % Total % Received % Xferd Average Speed Time Time Time Current - Dload Upload Total Spent Left Speed -100 14 0 14 0 0 18324 0 --:--:-- --:--:-- --:--:-- 14000 -1 -``` - -* 业务恢复后务必重启keepalived,否则keepalived会处于关闭状态 - -``` -$ systemctl restart keepalived -``` - -* 在k8s-master1、k8s-master2、k8s-master3上查看keeplived日志,有以下输出表示当前虚拟IP地址绑定的主机 - -``` -$ systemctl status keepalived -l -VRRP_Instance(VI_1) Sending gratuitous ARPs on ens160 for 192.168.60.80 -``` - ---- -[返回目录](#目录) - -#### kube-proxy配置 - -* 在k8s-master1上设置kube-proxy使用keepalived的虚拟IP地址,避免k8s-master1异常的时候所有节点的kube-proxy连接不上 - -``` -$ kubectl get -n kube-system configmap -NAME DATA AGE -extension-apiserver-authentication 6 4h -kube-flannel-cfg 2 4h -kube-proxy 1 4h -``` - -* 在k8s-master1上修改configmap/kube-proxy的server指向keepalived的虚拟IP地址 - -``` -$ kubectl edit -n kube-system configmap/kube-proxy - server: https://192.168.60.80:8443 -``` - -* 在k8s-master1上查看configmap/kube-proxy设置情况 - -``` -$ kubectl get -n kube-system configmap/kube-proxy -o yaml -``` - -* 在k8s-master1上删除所有kube-proxy的pod,让proxy重建 - -``` -kubectl get pods --all-namespaces -o wide | grep proxy -``` - -* 在k8s-master1、k8s-master2、k8s-master3上重启docker kubelet keepalived服务 - -``` -$ systemctl restart docker kubelet keepalived -``` - ---- -[返回目录](#目录) - -#### 验证master集群高可用 - -* 在k8s-master1上检查各个节点pod的启动状态,每个上都成功启动heapster、kube-apiserver、kube-controller-manager、kube-dns、kube-flannel、kube-proxy、kube-scheduler、kubernetes-dashboard、monitoring-grafana、monitoring-influxdb。并且所有pod都处于Running状态表示正常 - -``` -$ kubectl get pods --all-namespaces -o wide | grep k8s-master1 - -$ kubectl get pods --all-namespaces -o wide | grep k8s-master2 - -$ kubectl get pods --all-namespaces -o wide | grep k8s-master3 -``` - ---- -[返回目录](#目录) - -### node节点加入高可用集群设置 - -#### kubeadm加入高可用集群 -* 在k8s-master1上禁止在所有master节点上发布应用 - -``` -$ kubectl patch node k8s-master1 -p '{"spec":{"unschedulable":true}}' - -$ kubectl patch node k8s-master2 -p '{"spec":{"unschedulable":true}}' - -$ 
kubectl patch node k8s-master3 -p '{"spec":{"unschedulable":true}}' -``` - -* 在k8s-master1上查看集群的token - -``` -$ kubeadm token list -TOKEN TTL EXPIRES USAGES DESCRIPTION -xxxxxx.yyyyyy authentication,signing The default bootstrap token generated by 'kubeadm init' -``` - -* 在k8s-node1 ~ k8s-node8上,${TOKEN}为k8s-master1上显示的token,${VIRTUAL_IP}为keepalived的虚拟IP地址192.168.60.80 - -``` -$ kubeadm join --token ${TOKEN} ${VIRTUAL_IP}:8443 -``` - ---- -[返回目录](#目录) - -#### 部署应用验证集群 - -* 在k8s-node1 ~ k8s-node8上查看kubelet状态,kubelet状态为active (running)表示kubelet服务正常启动 - -``` -$ systemctl status kubelet -● kubelet.service - kubelet: The Kubernetes Node Agent - Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled) - Drop-In: /etc/systemd/system/kubelet.service.d - └─10-kubeadm.conf - Active: active (running) since Tue 2017-06-27 16:23:43 CST; 1 day 18h ago - Docs: http://kubernetes.io/docs/ - Main PID: 1146 (kubelet) - Memory: 204.9M - CGroup: /system.slice/kubelet.service - ├─ 1146 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require... - ├─ 2553 journalctl -k -f - ├─ 4988 /usr/sbin/glusterfs --log-level=ERROR --log-file=/var/lib/kubelet/pl... - └─14720 /usr/sbin/glusterfs --log-level=ERROR --log-file=/var/lib/kubelet/pl... -``` - -* 在k8s-master1上检查各个节点状态,发现所有k8s-nodes节点成功加入 - -``` -$ kubectl get nodes -o wide -NAME STATUS AGE VERSION -k8s-master1 Ready,SchedulingDisabled 5h v1.7.0 -k8s-master2 Ready,SchedulingDisabled 4h v1.7.0 -k8s-master3 Ready,SchedulingDisabled 4h v1.7.0 -k8s-node1 Ready 6m v1.7.0 -k8s-node2 Ready 4m v1.7.0 -k8s-node3 Ready 4m v1.7.0 -k8s-node4 Ready 3m v1.7.0 -k8s-node5 Ready 3m v1.7.0 -k8s-node6 Ready 3m v1.7.0 -k8s-node7 Ready 3m v1.7.0 -k8s-node8 Ready 3m v1.7.0 -``` - -* 在k8s-master1上测试部署nginx服务,nginx服务成功部署到k8s-node5上 - -``` -$ kubectl run nginx --image=nginx --port=80 -deployment "nginx" created - -$ kubectl get pod -o wide -l=run=nginx -NAME READY STATUS RESTARTS AGE IP NODE -nginx-2662403697-pbmwt 1/1 Running 0 5m 10.244.7.6 k8s-node5 -``` - -* 在k8s-master1让nginx服务外部可见 - -``` -$ kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort -service "nginx" exposed - -$ kubectl get svc -l=run=nginx -NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE -nginx 10.105.151.69 80:31639/TCP 43s - -$ curl k8s-master2:31639 - - - -Welcome to nginx! - - - -

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

- - - -``` - -* 至此,kubernetes高可用集群成功部署 😀 ---- -[返回目录](#目录) - diff --git a/v1.7/images/dashboard.png b/v1.7/images/dashboard.png deleted file mode 100644 index f86f497..0000000 Binary files a/v1.7/images/dashboard.png and /dev/null differ diff --git a/v1.7/images/heapster.png b/v1.7/images/heapster.png deleted file mode 100644 index e7d320a..0000000 Binary files a/v1.7/images/heapster.png and /dev/null differ diff --git a/v1.7/kube-dashboard/kubernetes-dashboard-1.6.1.yaml b/v1.7/kube-dashboard/kubernetes-dashboard-1.6.1.yaml deleted file mode 100644 index ca8df50..0000000 --- a/v1.7/kube-dashboard/kubernetes-dashboard-1.6.1.yaml +++ /dev/null @@ -1,98 +0,0 @@ -# Copyright 2015 Google Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# Configuration to deploy release version of the Dashboard UI compatible with -# Kubernetes 1.6 (RBAC enabled). -# -# Example usage: kubectl create -f - -apiVersion: v1 -kind: ServiceAccount -metadata: - labels: - k8s-app: kubernetes-dashboard - name: kubernetes-dashboard - namespace: kube-system ---- -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: kubernetes-dashboard - labels: - k8s-app: kubernetes-dashboard -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: cluster-admin -subjects: -- kind: ServiceAccount - name: kubernetes-dashboard - namespace: kube-system ---- -kind: Deployment -apiVersion: extensions/v1beta1 -metadata: - labels: - k8s-app: kubernetes-dashboard - name: kubernetes-dashboard - namespace: kube-system -spec: - replicas: 1 - revisionHistoryLimit: 10 - selector: - matchLabels: - k8s-app: kubernetes-dashboard - template: - metadata: - labels: - k8s-app: kubernetes-dashboard - spec: - containers: - - name: kubernetes-dashboard - image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1 - ports: - - containerPort: 9090 - protocol: TCP - args: - # Uncomment the following line to manually specify Kubernetes API server Host - # If not specified, Dashboard will attempt to auto discover the API server and connect - # to it. Uncomment only if the default does not work. 
- # - --apiserver-host=http://my-address:port - livenessProbe: - httpGet: - path: / - port: 9090 - initialDelaySeconds: 30 - timeoutSeconds: 30 - serviceAccountName: kubernetes-dashboard - # Comment the following tolerations if Dashboard must not be deployed on master - tolerations: - - key: node-role.kubernetes.io/master - effect: NoSchedule ---- -kind: Service -apiVersion: v1 -metadata: - labels: - k8s-app: kubernetes-dashboard - name: kubernetes-dashboard - namespace: kube-system -spec: - type: NodePort - ports: - - port: 80 - targetPort: 9090 - nodePort: 30000 - selector: - k8s-app: kubernetes-dashboard diff --git a/v1.7/kube-flannel/step1-kube-flannel-rbac-v0.7.1.yml b/v1.7/kube-flannel/step1-kube-flannel-rbac-v0.7.1.yml deleted file mode 100644 index d66465c..0000000 --- a/v1.7/kube-flannel/step1-kube-flannel-rbac-v0.7.1.yml +++ /dev/null @@ -1,42 +0,0 @@ -# Create the clusterrole and clusterrolebinding: -# $ kubectl create -f kube-flannel-rbac.yml -# Create the pod using the same namespace used by the flannel serviceaccount: -# $ kubectl create --namespace kube-system -f kube-flannel.yml ---- -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1beta1 -metadata: - name: flannel -rules: - - apiGroups: - - "" - resources: - - pods - verbs: - - get - - apiGroups: - - "" - resources: - - nodes - verbs: - - list - - watch - - apiGroups: - - "" - resources: - - nodes/status - verbs: - - patch ---- -kind: ClusterRoleBinding -apiVersion: rbac.authorization.k8s.io/v1beta1 -metadata: - name: flannel -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: flannel -subjects: -- kind: ServiceAccount - name: flannel - namespace: kube-system diff --git a/v1.7/kube-flannel/step2-kube-flannel-v0.7.1.yml b/v1.7/kube-flannel/step2-kube-flannel-v0.7.1.yml deleted file mode 100644 index 09dfe53..0000000 --- a/v1.7/kube-flannel/step2-kube-flannel-v0.7.1.yml +++ /dev/null @@ -1,93 +0,0 @@ ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: flannel - namespace: kube-system ---- -kind: ConfigMap -apiVersion: v1 -metadata: - name: kube-flannel-cfg - namespace: kube-system - labels: - tier: node - app: flannel -data: - cni-conf.json: | - { - "name": "cbr0", - "type": "flannel", - "delegate": { - "isDefaultGateway": true - } - } - net-conf.json: | - { - "Network": "10.244.0.0/16", - "Backend": { - "Type": "vxlan" - } - } ---- -apiVersion: extensions/v1beta1 -kind: DaemonSet -metadata: - name: kube-flannel-ds - namespace: kube-system - labels: - tier: node - app: flannel -spec: - template: - metadata: - labels: - tier: node - app: flannel - spec: - hostNetwork: true - nodeSelector: - beta.kubernetes.io/arch: amd64 - tolerations: - - key: node-role.kubernetes.io/master - operator: Exists - effect: NoSchedule - serviceAccountName: flannel - containers: - - name: kube-flannel - image: quay.io/coreos/flannel:v0.7.1-amd64 - command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ] - securityContext: - privileged: true - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - volumeMounts: - - name: run - mountPath: /run - - name: flannel-cfg - mountPath: /etc/kube-flannel/ - - name: install-cni - image: quay.io/coreos/flannel:v0.7.1-amd64 - command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ] - volumeMounts: - - name: cni - mountPath: /etc/cni/net.d - - name: flannel-cfg - 
mountPath: /etc/kube-flannel/ - volumes: - - name: run - hostPath: - path: /run - - name: cni - hostPath: - path: /etc/cni/net.d - - name: flannel-cfg - configMap: - name: kube-flannel-cfg diff --git a/v1.7/kube-heapster/grafana.yaml b/v1.7/kube-heapster/grafana.yaml deleted file mode 100644 index 4bdce05..0000000 --- a/v1.7/kube-heapster/grafana.yaml +++ /dev/null @@ -1,66 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: monitoring-grafana - namespace: kube-system -spec: - replicas: 1 - template: - metadata: - labels: - task: monitoring - k8s-app: grafana - spec: - containers: - - name: grafana - image: gcr.io/google_containers/heapster-grafana-amd64:v4.0.2 - ports: - - containerPort: 3000 - protocol: TCP - volumeMounts: - - mountPath: /var - name: grafana-storage - env: - - name: INFLUXDB_HOST - value: monitoring-influxdb - - name: GRAFANA_PORT - value: "3000" - # The following env variables are required to make Grafana accessible via - # the kubernetes api-server proxy. On production clusters, we recommend - # removing these env variables, setup auth for grafana, and expose the grafana - # service using a LoadBalancer or a public IP. - - name: GF_AUTH_BASIC_ENABLED - value: "false" - - name: GF_AUTH_ANONYMOUS_ENABLED - value: "true" - - name: GF_AUTH_ANONYMOUS_ORG_ROLE - value: Admin - - name: GF_SERVER_ROOT_URL - # If you're only using the API Server proxy, set this value instead: - # value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/ - value: / - volumes: - - name: grafana-storage - emptyDir: {} ---- -apiVersion: v1 -kind: Service -metadata: - labels: - # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons) - # If you are NOT using this as an addon, you should comment out this line. - kubernetes.io/cluster-service: 'true' - kubernetes.io/name: monitoring-grafana - name: monitoring-grafana - namespace: kube-system -spec: - # In a production setup, we recommend accessing Grafana through an external Loadbalancer - # or through a public IP. 
- # type: LoadBalancer - # You could also use NodePort to expose the service at a randomly-generated port - # type: NodePort - ports: - - port: 80 - targetPort: 3000 - selector: - k8s-app: grafana diff --git a/v1.7/kube-heapster/heapster-rbac.yaml b/v1.7/kube-heapster/heapster-rbac.yaml deleted file mode 100644 index 74df610..0000000 --- a/v1.7/kube-heapster/heapster-rbac.yaml +++ /dev/null @@ -1,67 +0,0 @@ -apiVersion: v1 -kind: ServiceAccount -metadata: - name: heapster - namespace: kube-system ---- -kind: ClusterRoleBinding -apiVersion: rbac.authorization.k8s.io/v1beta1 -metadata: - name: heapster -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: system:heapster -subjects: -- kind: ServiceAccount - name: heapster - namespace: kube-system ---- -apiVersion: apps/v1beta1 -kind: Deployment -metadata: - name: heapster - labels: - k8s-app: heapster - task: monitoring - namespace: kube-system -spec: - replicas: 1 - template: - metadata: - labels: - k8s-app: heapster - task: monitoring - spec: - tolerations: - - key: beta.kubernetes.io/arch - value: arm - effect: NoSchedule - - key: beta.kubernetes.io/arch - value: arm64 - effect: NoSchedule - serviceAccountName: heapster - containers: - - name: heapster - image: gcr.io/google_containers/heapster-amd64:v1.3.0 - command: - - /heapster - - --source=kubernetes:https://kubernetes.default - - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086 ---- -apiVersion: v1 -kind: Service -metadata: - labels: - task: monitoring - k8s-app: heapster - kubernetes.io/cluster-service: "true" - kubernetes.io/name: Heapster - name: heapster - namespace: kube-system -spec: - ports: - - port: 80 - targetPort: 8082 - selector: - k8s-app: heapster diff --git a/v1.7/kube-heapster/influxdb.yaml b/v1.7/kube-heapster/influxdb.yaml deleted file mode 100644 index 9afdf55..0000000 --- a/v1.7/kube-heapster/influxdb.yaml +++ /dev/null @@ -1,40 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: monitoring-influxdb - namespace: kube-system -spec: - replicas: 1 - template: - metadata: - labels: - task: monitoring - k8s-app: influxdb - spec: - containers: - - name: influxdb - image: gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1 - volumeMounts: - - mountPath: /data - name: influxdb-storage - volumes: - - name: influxdb-storage - emptyDir: {} ---- -apiVersion: v1 -kind: Service -metadata: - labels: - task: monitoring - # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons) - # If you are NOT using this as an addon, you should comment out this line. 
- kubernetes.io/cluster-service: 'true' - kubernetes.io/name: monitoring-influxdb - name: monitoring-influxdb - namespace: kube-system -spec: - ports: - - port: 8086 - targetPort: 8086 - selector: - k8s-app: influxdb diff --git a/v1.7/kubeadm-init-v1.7.x.yaml b/v1.7/kubeadm-init-v1.7.x.yaml deleted file mode 100644 index a780491..0000000 --- a/v1.7/kubeadm-init-v1.7.x.yaml +++ /dev/null @@ -1,18 +0,0 @@ -apiVersion: kubeadm.k8s.io/v1alpha1 -kind: MasterConfiguration -kubernetesVersion: v1.7.0 -networking: - podSubnet: 10.244.0.0/16 -apiServerCertSANs: -- ${HOST_NAME} -- ${HOST_NAME} -- ${HOST_NAME} -- ${HOST_IP} -- ${HOST_IP} -- ${HOST_IP} -- ${VIRTUAL_IP} -etcd: - endpoints: - - http://${HOST_IP}:2379 - - http://${HOST_IP}:2379 - - http://${HOST_IP}:2379 diff --git a/v1.7/nginx-default.conf b/v1.7/nginx-default.conf deleted file mode 100644 index b330c0b..0000000 --- a/v1.7/nginx-default.conf +++ /dev/null @@ -1,47 +0,0 @@ - -user nginx; -worker_processes 1; - -error_log /var/log/nginx/error.log warn; -pid /var/run/nginx.pid; - - -events { - worker_connections 1024; -} - - -http { - include /etc/nginx/mime.types; - default_type application/octet-stream; - - log_format main '$remote_addr - $remote_user [$time_local] "$request" ' - '$status $body_bytes_sent "$http_referer" ' - '"$http_user_agent" "$http_x_forwarded_for"'; - - access_log /var/log/nginx/access.log main; - - sendfile on; - #tcp_nopush on; - - keepalive_timeout 65; - - #gzip on; - - include /etc/nginx/conf.d/*.conf; -} - -stream { - upstream apiserver { - server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s; - server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s; - server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s; - } - - server { - listen 8443; - proxy_connect_timeout 1s; - proxy_timeout 3s; - proxy_pass apiserver; - } -} diff --git a/v1.9/README.md b/v1.9/README.md deleted file mode 100644 index 7c4a8cb..0000000 --- a/v1.9/README.md +++ /dev/null @@ -1,1028 +0,0 @@ -# kubeadm-highavailiability - kubernetes high availiability deployment based on kubeadm, for Kubernetes version v1.11.x/v1.9.x/v1.7.x/v1.6.x - -![k8s logo](../images/Kubernetes.png) - -- [中文文档(for v1.11.x版本)](../README_CN.md) -- [English document(for v1.11.x version)](../README.md) -- [中文文档(for v1.9.x版本)](../v1.9/README_CN.md) -- [English document(for v1.9.x version)](../v1.9/README.md) -- [中文文档(for v1.7.x版本)](../v1.7/README_CN.md) -- [English document(for v1.7.x version)](../v1.7/README.md) -- [中文文档(for v1.6.x版本)](../v1.6/README_CN.md) -- [English document(for v1.6.x version)](../v1.6/README.md) - ---- - -- [GitHub project URL](https://github.com/cookeem/kubeadm-ha/) -- [OSChina project URL](https://git.oschina.net/cookeem/kubeadm-ha/) - ---- - -- This operation instruction is for version v1.9.x kubernetes cluster - -> Before v1.9.0 kubeadm still not support high availability deployment, so it's not recommend for production usage. But from v1.9.0, kubeadm support high availability deployment officially, this instruction version for at least v1.9.0. - -### category - -1. [deployment architecture](#deployment-architecture) - 1. [deployment architecture summary](#deployment-architecture-summary) - 1. [detail deployment architecture](#detail-deployment-architecture) - 1. [hosts list](#hosts-list) -1. [prerequisites](#prerequisites) - 1. [version info](#version-info) - 1. [required docker images](#required-docker-images) - 1. [system configuration](#system-configuration) -1. [kubernetes installation](#kubernetes-installation) - 1. 
[firewalld and iptables settings](#firewalld-and-iptables-settings) - 1. [kubernetes and related services installation](#kubernetes-and-related-services-installation) -1. [configuration files settings](#configuration-files-settings) - 1. [script files settings](#script-files-settings) - 1. [deploy independent etcd cluster](#deploy-independent-etcd-cluster) -1. [use kubeadm to init first master](#use-kubeadm-to-init-first-master) - 1. [kubeadm init](#kubeadm-init) - 1. [basic components installation](#basic-components-installation) -1. [kubernetes masters high avialiability configuration](#kubernetes-masters-high-avialiability-configuration) - 1. [copy configuration files](#copy-configuration-files) - 1. [other master nodes init](#other-master-nodes-init) - 1. [keepalived installation](#keepalived-installation) - 1. [nginx load balancer configuration](#nginx-load-balancer-configuration) - 1. [kube-proxy configuration](#kube-proxy-configuration) -1. [all nodes join the kubernetes cluster](#all-nodes-join-the-kubernetes-cluster) - 1. [use kubeadm to join the cluster](#use-kubeadm-to-join-the-cluster) - 1. [verify kubernetes cluster high availiablity](#verify-kubernetes-cluster-high-availiablity) - -### deployment architecture - -#### deployment architecture summary - -![ha logo](../images/ha.png) - ---- - -[category](#category) - -#### detail deployment architecture - -![k8s ha](../images/k8s-ha.png) - -* kubernetes components: - -> kube-apiserver: exposes the Kubernetes API. It is the front-end for the Kubernetes control plane. It is designed to scale horizontally – that is, it scales by deploying more instances. - -> etcd: is used as Kubernetes’ backing store. All cluster data is stored here. Always have a backup plan for etcd’s data for your Kubernetes cluster. - - -> kube-scheduler: watches newly created pods that have no node assigned, and selects a node for them to run on. - - -> kube-controller-manager: runs controllers, which are the background threads that handle routine tasks in the cluster. Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process. - -> kubelet: is the primary node agent. It watches for pods that have been assigned to its node (either by apiserver or via local configuration file) - -> kube-proxy: enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding. - - -* load balancer - -> keepalived cluster config a virtual IP address (192.168.20.10), this virtual IP address point to devops-master01, devops-master02, devops-master03. - -> nginx service as the load balancer of devops-master01, devops-master02, devops-master03's apiserver. The other nodes kubernetes services connect the keepalived virtual ip address (192.168.20.10) and nginx exposed port (16443) to communicate with the master cluster's apiservers. 
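To make the load-balancing layer concrete, the sketch below shows the kind of nginx `stream` configuration this topology implies (one TCP proxy in front of the three apiservers). It is illustrative only: the real file is generated later by `create-config.sh` as `nginx-lb/nginx-lb.conf`, so the exact listen port and options there may differ.

```
stream {
    upstream apiserver {
        # the three master apiservers (devops-master01 ~ 03 in this guide)
        server 192.168.20.27:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 192.168.20.28:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 192.168.20.29:6443 weight=5 max_fails=3 fail_timeout=30s;
    }

    server {
        # nodes and kubectl reach the apiservers through the keepalived VIP on this port
        listen 16443;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass apiserver;
    }
}
```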
- ---- - -[category](#category) - -#### hosts list - -HostName | IPAddress | Notes | Components -:--- | :--- | :--- | :--- -devops-master01 ~ 03 | 192.168.20.27 ~ 29 | master nodes * 3 | keepalived, nginx, etcd, kubelet, kube-apiserver, kube-scheduler, kube-proxy, kube-dashboard, heapster, calico -N/A | 192.168.20.10 | keepalived virtual IP | N/A -devops-node01 ~ 04 | 192.168.20.17 ~ 20 | worker nodes * 4 | kubelet, kube-proxy - ---- - -[category](#category) - -### prerequisites - -#### version info - -* Linux version: CentOS 7.4.1708 -* Core version: 4.6.4-1.el7.elrepo.x86_64 - -``` -$ cat /etc/redhat-release -CentOS Linux release 7.4.1708 (Core) - -$ uname -r -4.6.4-1.el7.elrepo.x86_64 -``` - -* docker version: 17.12.0-ce-rc2 - -``` -$ docker version -Client: - Version: 17.12.0-ce-rc2 - API version: 1.35 - Go version: go1.9.2 - Git commit: f9cde63 - Built: Tue Dec 12 06:42:20 2017 - OS/Arch: linux/amd64 - -Server: - Engine: - Version: 17.12.0-ce-rc2 - API version: 1.35 (minimum version 1.12) - Go version: go1.9.2 - Git commit: f9cde63 - Built: Tue Dec 12 06:44:50 2017 - OS/Arch: linux/amd64 - Experimental: false -``` - -* kubeadm version: v1.9.3 - -``` -$ kubeadm version -kubeadm version: &version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"} -``` - -* kubelet version: v1.9.3 - -``` -$ kubelet --version -Kubernetes v1.9.3 -``` - -* networks add-ons - -> canal (flannel + calico) - ---- - -[category](#category) - -#### required docker images - -``` -# kuberentes basic components -docker pull gcr.io/google_containers/kube-apiserver-amd64:v1.9.3 -docker pull gcr.io/google_containers/kube-proxy-amd64:v1.9.3 -docker pull gcr.io/google_containers/kube-scheduler-amd64:v1.9.3 -docker pull gcr.io/google_containers/kube-controller-manager-amd64:v1.9.3 -docker pull gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7 -docker pull gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7 -docker pull gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7 -docker pull gcr.io/google_containers/etcd-amd64:3.1.10 -docker pull gcr.io/google_containers/pause-amd64:3.0 - -# kubernetes networks add ons -docker pull quay.io/coreos/flannel:v0.9.1-amd64 -docker pull quay.io/calico/node:v3.0.3 -docker pull quay.io/calico/kube-controllers:v2.0.1 -docker pull quay.io/calico/cni:v2.0.1 - -# kubernetes dashboard -docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.3 - -# kubernetes heapster -docker pull gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3 -docker pull gcr.io/google_containers/heapster-grafana-amd64:v4.4.3 -docker pull gcr.io/google_containers/heapster-amd64:v1.4.2 - -# kubernetes apiserver load balancer -docker pull nginx:latest -``` - ---- - -[category](#category) - -#### system configuration - -* on all kubernetes nodes: add kubernetes' repository - -``` -$ cat < /etc/yum.repos.d/kubernetes.repo -[kubernetes] -name=Kubernetes -baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 -enabled=1 -gpgcheck=1 -repo_gpgcheck=1 -gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg -EOF -``` - -* on all kubernetes nodes: use yum to update system - -``` -$ yum update -y -``` - -* on all kubernetes nodes: set SELINUX to permissive mode - -``` -$ vi /etc/selinux/config -SELINUX=permissive - -$ setenforce 0 -``` - -* on all 
kubernetes nodes: set iptables parameters - -``` -$ cat < /etc/sysctl.d/k8s.conf -net.bridge.bridge-nf-call-ip6tables = 1 -net.bridge.bridge-nf-call-iptables = 1 -net.ipv4.ip_forward = 1 -EOF - -sysctl --system -``` - -* on all kubernetes nodes: disable swap - -``` -$ swapoff -a - -# disable swap mount point in /etc/fstab -$ vi /etc/fstab -#/dev/mapper/centos-swap swap swap defaults 0 0 - -# check swap is disabled -$ cat /proc/swaps -Filename Type Size Used Priority -``` - -* on all kubernetes nodes: reboot host - -``` -$ reboot -``` - ---- - -[category](#category) - -### kubernetes installation - -#### firewalld and iptables settings - -- master ports list - -Protocol | Direction | Port | Comment -:--- | :--- | :--- | :--- -TCP | Inbound | 16443* | Load balancer Kubernetes API server port -TCP | Inbound | 6443* | Kubernetes API server -TCP | Inbound | 4001 | etcd listen client port -TCP | Inbound | 2379-2380 | etcd server client API -TCP | Inbound | 10250 | Kubelet API -TCP | Inbound | 10251 | kube-scheduler -TCP | Inbound | 10252 | kube-controller-manager -TCP | Inbound | 10255 | Read-only Kubelet API -TCP | Inbound | 30000-32767 | NodePort Services - -- on all kubernetes master nodes: enable relative ports on firewalld (because all these services are deploy by docker, if your docker version is 17.x, is not necessary to set firewalld by commands below, because docker will set iptables automatically and enable relative ports) - -``` -$ systemctl status firewalld - -$ firewall-cmd --zone=public --add-port=16443/tcp --permanent -$ firewall-cmd --zone=public --add-port=6443/tcp --permanent -$ firewall-cmd --zone=public --add-port=4001/tcp --permanent -$ firewall-cmd --zone=public --add-port=2379-2380/tcp --permanent -$ firewall-cmd --zone=public --add-port=10250/tcp --permanent -$ firewall-cmd --zone=public --add-port=10251/tcp --permanent -$ firewall-cmd --zone=public --add-port=10252/tcp --permanent -$ firewall-cmd --zone=public --add-port=10255/tcp --permanent -$ firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent - -$ firewall-cmd --reload - -$ firewall-cmd --list-all --zone=public -public (active) - target: default - icmp-block-inversion: no - interfaces: ens2f1 ens1f0 nm-bond - sources: - services: ssh dhcpv6-client - ports: 4001/tcp 6443/tcp 2379-2380/tcp 10250/tcp 10251/tcp 10252/tcp 10255/tcp 30000-32767/tcp - protocols: - masquerade: no - forward-ports: - source-ports: - icmp-blocks: - rich rules: -``` - -- worker ports list - -Protocol | Direction | Port | Comment -:--- | :--- | :--- | :--- -TCP | Inbound | 10250 | Kubelet API -TCP | Inbound | 10255 | Read-only Kubelet API -TCP | Inbound | 30000-32767 | NodePort Services** - -- on all kubernetes worker nodes: enable relative ports on firewalld (because all these services are deploy by docker, if your docker version is 17.x, is not necessary to set firewalld by commands below, because docker will set iptables automatically and enable relative ports) - -``` -$ systemctl status firewalld - -$ firewall-cmd --zone=public --add-port=10250/tcp --permanent -$ firewall-cmd --zone=public --add-port=10255/tcp --permanent -$ firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent - -$ firewall-cmd --reload - -$ firewall-cmd --list-all --zone=public -public (active) - target: default - icmp-block-inversion: no - interfaces: ens2f1 ens1f0 nm-bond - sources: - services: ssh dhcpv6-client - ports: 10250/tcp 10255/tcp 30000-32767/tcp - protocols: - masquerade: no - forward-ports: - source-ports: - icmp-blocks: - rich rules: 
-``` - -* on all kubernetes nodes: set firewalld to enable kube-proxy port forward - -``` -$ firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 1 -i docker0 -j ACCEPT -m comment --comment "kube-proxy redirects" -$ firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 1 -o docker0 -j ACCEPT -m comment --comment "docker subnet" -$ firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 1 -i flannel.1 -j ACCEPT -m comment --comment "flannel subnet" -$ firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 1 -o flannel.1 -j ACCEPT -m comment --comment "flannel subnet" -$ firewall-cmd --reload - -$ firewall-cmd --direct --get-all-rules -ipv4 filter INPUT 1 -i docker0 -j ACCEPT -m comment --comment 'kube-proxy redirects' -ipv4 filter FORWARD 1 -o docker0 -j ACCEPT -m comment --comment 'docker subnet' -ipv4 filter FORWARD 1 -i flannel.1 -j ACCEPT -m comment --comment 'flannel subnet' -ipv4 filter FORWARD 1 -o flannel.1 -j ACCEPT -m comment --comment 'flannel subnet' -``` - -- on all kubernetes nodes: remove this iptables chains, this settings will prevent kube-proxy node port forward. ( Notice: please run this command each time you restart firewalld ) - -``` -iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited -``` - ---- - -[category](#category) - -#### kubernetes and related services installation - -* on all kubernetes nodes: check SELINUX mode, it must be permissive mode - -``` -$ getenforce -Permissive -``` - -* on all kubernetes nodes: install kubernetes and related services, then start up kubelet and docker daemon - -``` -$ yum install -y docker-ce-17.12.0.ce-0.2.rc2.el7.centos.x86_64 -$ yum install -y docker-compose-1.9.0-5.el7.noarch -$ systemctl enable docker && systemctl start docker - -$ yum install -y kubelet-1.9.3-0.x86_64 kubeadm-1.9.3-0.x86_64 kubectl-1.9.3-0.x86_64 -$ systemctl enable kubelet && systemctl start kubelet -``` - -* on all kubernetes nodes: set kubelet KUBELET_CGROUP_ARGS parameter the same as docker daemon's settings, here docker daemon and kubelet use cgroupfs as cgroup-driver. - -``` -# by default kubelet use cgroup-driver=systemd, modify it as cgroup-driver=cgroupfs -$ vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -#Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd" -Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs" - -# reload then restart kubelet service -$ systemctl daemon-reload && systemctl restart kubelet -``` - -* on all kubernetes nodes: install and start keepalived service - -``` -$ yum install -y keepalived -$ systemctl enable keepalived && systemctl restart keepalived -``` - ---- - -[category](#category) - -### configuration files settings - -#### script files settings - -* on all kubernetes master nodes: get the source code, and change the working directory to the source code directory - -``` -$ git clone https://github.com/cookeem/kubeadm-ha - -$ cd kubeadm-ha -``` - -* on all kubernetes master nodes: set the `create-config.sh` file, this script will create all configuration files, follow the setting comment and make sure you set the parameters correctly. - -``` -$ vi create-config.sh - -# local machine ip address -export K8SHA_IPLOCAL=192.168.20.27 - -# local machine etcd name, options: etcd1, etcd2, etcd3 -export K8SHA_ETCDNAME=etcd1 - -# local machine keepalived state config, options: MASTER, BACKUP. One keepalived cluster only one MASTER, other's are BACKUP -export K8SHA_KA_STATE=MASTER - -# local machine keepalived priority config, options: 102, 101, 100. 
MASTER must 102 -export K8SHA_KA_PRIO=102 - -# local machine keepalived network interface name config, for example: eth0 -export K8SHA_KA_INTF=nm-bond - -####################################### -# all masters settings below must be same -####################################### - -# master keepalived virtual ip address -export K8SHA_IPVIRTUAL=192.168.20.10 - -# master01 ip address -export K8SHA_IP1=192.168.20.27 - -# master02 ip address -export K8SHA_IP2=192.168.20.28 - -# master03 ip address -export K8SHA_IP3=192.168.20.29 - -# master01 hostname -export K8SHA_HOSTNAME1=devops-master01 - -# master02 hostname -export K8SHA_HOSTNAME2=devops-master02 - -# master03 hostname -export K8SHA_HOSTNAME3=devops-master03 - -# keepalived auth_pass config, all masters must be same -export K8SHA_KA_AUTH=4cdf7dc3b4c90194d1600c483e10ad1d - -# kubernetes cluster token, you can use 'kubeadm token generate' to get a new one -export K8SHA_TOKEN=7f276c.0741d82a5337f526 - -# kubernetes CIDR pod subnet, if CIDR pod subnet is "10.244.0.0/16" please set to "10.244.0.0\\/16" -export K8SHA_CIDR=10.244.0.0\\/16 - -# kubernetes CIDR service subnet, if CIDR service subnet is "10.96.0.0/12" please set to "10.96.0.0\\/12" -export K8SHA_SVC_CIDR=10.96.0.0\\/12 - -# calico network settings, set a reachable ip address for the cluster network interface, for example you can use the gateway ip address -export K8SHA_CALICO_REACHABLE_IP=192.168.20.1 -``` - -* on all kubernetes master nodes: run the `create-config.sh` script file and create related configuration files: - -> etcd cluster docker-compose.yaml file - -> keepalived configuration file - -> nginx load balancer docker-compose.yaml file - -> kubeadm init configuration file - -> canal configuration file - -``` -$ ./create-config.sh -set etcd cluster docker-compose.yaml file success: etcd/docker-compose.yaml -set keepalived config file success: /etc/keepalived/keepalived.conf -set nginx load balancer config file success: nginx-lb/nginx-lb.conf -set kubeadm init config file success: kubeadm-init.yaml -set canal deployment config file success: kube-canal/canal.yaml -``` - ---- - -[category](#category) - -#### deploy independent etcd cluster - -* on all kubernetes master nodes: deploy independent etcd cluster (non-TLS mode) - -``` -# reset kubernetes cluster -$ kubeadm reset - -# clear etcd cluster data -$ rm -rf /var/lib/etcd-cluster - -# reset and start etcd cluster -$ docker-compose --file etcd/docker-compose.yaml stop -$ docker-compose --file etcd/docker-compose.yaml rm -f -$ docker-compose --file etcd/docker-compose.yaml up -d - -# check etcd cluster status -$ docker exec -ti etcd etcdctl cluster-health -member 531504c79088f553 is healthy: got healthy result from http://192.168.20.29:2379 -member 56c53113d5e1cfa3 is healthy: got healthy result from http://192.168.20.27:2379 -member 7026e604579e4d64 is healthy: got healthy result from http://192.168.20.28:2379 -cluster is healthy - -$ docker exec -ti etcd etcdctl member list -531504c79088f553: name=etcd3 peerURLs=http://192.168.20.29:2380 clientURLs=http://192.168.20.29:2379,http://192.168.20.29:4001 isLeader=false -56c53113d5e1cfa3: name=etcd1 peerURLs=http://192.168.20.27:2380 clientURLs=http://192.168.20.27:2379,http://192.168.20.27:4001 isLeader=false -7026e604579e4d64: name=etcd2 peerURLs=http://192.168.20.28:2380 clientURLs=http://192.168.20.28:2379,http://192.168.20.28:4001 isLeader=true -``` - ---- - -[category](#category) - -### use kubeadm to init first master - -#### kubeadm init - -* on all kubernetes master 
nodes: reset cni and docker network - -``` -$ systemctl stop kubelet -$ systemctl stop docker -$ rm -rf /var/lib/cni/ -$ rm -rf /var/lib/kubelet/* -$ rm -rf /etc/cni/ - -$ ip a | grep -E 'docker|flannel|cni' -$ ip link del docker0 -$ ip link del flannel.1 -$ ip link del cni0 - -$ systemctl restart docker && systemctl restart kubelet -$ ip a | grep -E 'docker|flannel|cni' -``` - -* on devops-master01: use kubeadm to init a kubernetes cluster, notice: you must save the following message: kubeadm join --token XXX --discovery-token-ca-cert-hash YYY , this command will use lately. - -``` -$ kubeadm init --config=kubeadm-init.yaml -... - kubeadm join --token 7f276c.0741d82a5337f526 192.168.20.27:6443 --discovery-token-ca-cert-hash sha256:a4a1eaf725a0fc67c3028b3063b92e6af7f2eb0f4ae028f12b3415a6fd2d2a5e -``` - -* on all kubernetes master nodes: set kubectl client environment variable - -``` -$ vi ~/.bashrc -export KUBECONFIG=/etc/kubernetes/admin.conf - -$ source ~/.bashrc -``` - -#### basic components installation - -* on devops-master01: install flannel network add-ons - -``` -# master may not work if no network add-ons -$ kubectl get node -NAME STATUS ROLES AGE VERSION -devops-master01 NotReady master 14s v1.9.3 - -# install canal add-ons -$ kubectl apply -f kube-canal/ -configmap "canal-config" created -daemonset "canal" created -customresourcedefinition "felixconfigurations.crd.projectcalico.org" created -customresourcedefinition "bgpconfigurations.crd.projectcalico.org" created -customresourcedefinition "ippools.crd.projectcalico.org" created -customresourcedefinition "clusterinformations.crd.projectcalico.org" created -customresourcedefinition "globalnetworkpolicies.crd.projectcalico.org" created -customresourcedefinition "networkpolicies.crd.projectcalico.org" created -serviceaccount "canal" created -clusterrole "calico" created -clusterrole "flannel" created -clusterrolebinding "canal-flannel" created -clusterrolebinding "canal-calico" created - -# waiting for all pods to be normal status -$ kubectl get pods --all-namespaces -o wide -NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE -kube-system canal-hpn82 3/3 Running 0 1m 192.168.20.27 devops-master01 -kube-system kube-apiserver-devops-master01 1/1 Running 0 1m 192.168.20.27 devops-master01 -kube-system kube-controller-manager-devops-master01 1/1 Running 0 50s 192.168.20.27 devops-master01 -kube-system kube-dns-6f4fd4bdf-vwbk8 3/3 Running 0 1m 10.244.0.2 devops-master01 -kube-system kube-proxy-mr6l8 1/1 Running 0 1m 192.168.20.27 devops-master01 -kube-system kube-scheduler-devops-master01 1/1 Running 0 57s 192.168.20.27 devops-master01 -``` - -* on devops-master01: install dashboard - -``` -# set master node as schedulable -$ kubectl taint nodes --all node-role.kubernetes.io/master- - -$ kubectl apply -f kube-dashboard/ -serviceaccount "admin-user" created -clusterrolebinding "admin-user" created -secret "kubernetes-dashboard-certs" created -serviceaccount "kubernetes-dashboard" created -role "kubernetes-dashboard-minimal" created -rolebinding "kubernetes-dashboard-minimal" created -deployment "kubernetes-dashboard" created -service "kubernetes-dashboard" created -``` - -* use browser to access dashboard - -> https://devops-master01:30000/#!/login - -* dashboard login interface - -![dashboard-login](images/dashboard-login.png) - -* use command below to get token, copy and paste the token on login interface - -``` -$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') -``` - 
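If you only want the raw bearer token (rather than picking it out of the full `describe` output), a jsonpath query like the sketch below also works; it assumes the `admin-user` ServiceAccount created by `kube-dashboard/` above.

```
# print only the decoded token of the admin-user ServiceAccount (illustrative alternative)
$ kubectl -n kube-system get secret \
    $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') \
    -o jsonpath='{.data.token}' | base64 -d
```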
-![dashboard](images/dashboard.png) - -* on devops-master01: install heapster - -``` -$ kubectl apply -f kube-heapster/influxdb/ -service "monitoring-grafana" created -serviceaccount "heapster" created -deployment "heapster" created -service "heapster" created -deployment "monitoring-influxdb" created -service "monitoring-influxdb" created - -$ kubectl apply -f kube-heapster/rbac/ -clusterrolebinding "heapster" created - -$ kubectl get pods --all-namespaces -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system canal-hpn82 3/3 Running 0 6m -kube-system heapster-65c5499476-gg2tk 1/1 Running 0 2m -kube-system kube-apiserver-devops-master01 1/1 Running 0 6m -kube-system kube-controller-manager-devops-master01 1/1 Running 0 5m -kube-system kube-dns-6f4fd4bdf-vwbk8 3/3 Running 0 6m -kube-system kube-proxy-mr6l8 1/1 Running 0 6m -kube-system kube-scheduler-devops-master01 1/1 Running 0 6m -kube-system kubernetes-dashboard-7c7bfdd855-2slp2 1/1 Running 0 4m -kube-system monitoring-grafana-6774f65b56-mwdjv 1/1 Running 0 2m -kube-system monitoring-influxdb-59d57d4d58-xmrxk 1/1 Running 0 2m - - -# wait for 5 minutes -$ kubectl top nodes -NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% -devops-master01 242m 0% 1690Mi 0% -``` - -* heapster performance info will show on dashboard - -> https://devops-master01:30000/#!/login - -![heapster-dashboard](images/heapster-dashboard.png) - -![heapster](images/heapster.png) - -* now canal, dashboard, heapster had installed on the first master node - ---- - -[category](#category) - -### kubernetes masters high avialiability configuration - -#### copy configuration files - -* on devops-master01: copy `category/etc/kubernetes/pki` to devops-master02 and devops-master03 - -``` -scp -r /etc/kubernetes/pki devops-master02:/etc/kubernetes/ - -scp -r /etc/kubernetes/pki devops-master03:/etc/kubernetes/ -``` - ---- -[category](#category) - -#### other master nodes init - -* on devops-master02: use kubeadm to init master cluster, make sure pod kube-apiserver-{current-node-name} is in running status - -``` -# you will found that output token and discovery-token-ca-cert-hash are the same with devops-master01 -$ kubeadm init --config=kubeadm-init.yaml -... - kubeadm join --token 7f276c.0741d82a5337f526 192.168.20.28:6443 --discovery-token-ca-cert-hash sha256:a4a1eaf725a0fc67c3028b3063b92e6af7f2eb0f4ae028f12b3415a6fd2d2a5e -``` - -* on devops-master03: use kubeadm to init master cluster, make sure pod kube-apiserver-{current-node-name} is in running status - -``` -# you will found that output token and discovery-token-ca-cert-hash are the same with devops-master01 -$ kubeadm init --config=kubeadm-init.yaml -... 
- kubeadm join --token 7f276c.0741d82a5337f526 192.168.20.29:6443 --discovery-token-ca-cert-hash sha256:a4a1eaf725a0fc67c3028b3063b92e6af7f2eb0f4ae028f12b3415a6fd2d2a5e -``` - -* on any kubernetes master nodes: check nodes status - -``` -$ kubectl get nodes -NAME STATUS ROLES AGE VERSION -devops-master01 Ready master 19m v1.9.3 -devops-master02 Ready master 4m v1.9.3 -devops-master03 Ready master 4m v1.9.3 -``` - -* on any kubernetes master nodes: check all pod status - -``` -$ kubectl get pods --all-namespaces -o wide -NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE -kube-system canal-cw8tw 3/3 Running 4 3m 192.168.20.29 devops-master03 -kube-system canal-d54hs 3/3 Running 3 5m 192.168.20.28 devops-master02 -kube-system canal-hpn82 3/3 Running 5 17m 192.168.20.27 devops-master01 -kube-system heapster-65c5499476-zwgnh 1/1 Running 1 8m 10.244.0.7 devops-master01 -kube-system kube-apiserver-devops-master01 1/1 Running 1 2m 192.168.20.27 devops-master01 -kube-system kube-apiserver-devops-master02 1/1 Running 0 11s 192.168.20.28 devops-master02 -kube-system kube-apiserver-devops-master03 1/1 Running 0 12s 192.168.20.29 devops-master03 -kube-system kube-controller-manager-devops-master01 1/1 Running 1 16m 192.168.20.27 devops-master01 -kube-system kube-controller-manager-devops-master02 1/1 Running 1 3m 192.168.20.28 devops-master02 -kube-system kube-controller-manager-devops-master03 1/1 Running 1 2m 192.168.20.29 devops-master03 -kube-system kube-dns-6f4fd4bdf-vwbk8 3/3 Running 3 17m 10.244.0.2 devops-master01 -kube-system kube-proxy-59pwn 1/1 Running 1 5m 192.168.20.28 devops-master02 -kube-system kube-proxy-jxt5s 1/1 Running 1 3m 192.168.20.29 devops-master03 -kube-system kube-proxy-mr6l8 1/1 Running 1 17m 192.168.20.27 devops-master01 -kube-system kube-scheduler-devops-master01 1/1 Running 1 16m 192.168.20.27 devops-master01 -kube-system kube-scheduler-devops-master02 1/1 Running 1 3m 192.168.20.28 devops-master02 -kube-system kube-scheduler-devops-master03 1/1 Running 1 2m 192.168.20.29 devops-master03 -kube-system kubernetes-dashboard-7c7bfdd855-2slp2 1/1 Running 1 15m 10.244.0.3 devops-master01 -kube-system monitoring-grafana-6774f65b56-mwdjv 1/1 Running 1 13m 10.244.0.4 devops-master01 -kube-system monitoring-influxdb-59d57d4d58-xmrxk 1/1 Running 1 13m 10.244.0.6 devops-master01 -``` - -* on any kubernetes master nodes: set all master nodes scheduable - -``` -$ kubectl taint nodes --all node-role.kubernetes.io/master- -node "devops-master02" untainted -node "devops-master03" untainted -``` - -* on any kubernetes master nodes: scale the kube-system deployment to all master nodes - -``` -$ kubectl get deploy -n kube-system -NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE -heapster 1 1 1 1 3d -kube-dns 2 2 2 2 4d -kubernetes-dashboard 1 1 1 1 3d -monitoring-grafana 1 1 1 1 3d -monitoring-influxdb 1 1 1 1 3d - -# dns scale to all master nodes -$ kubectl scale --replicas=2 -n kube-system deployment/kube-dns -$ kubectl get pods --all-namespaces -o wide| grep kube-dns -``` - ---- - -[category](#category) - -#### keepalived installation - -* on all kubernetes master nodes: install keepalived service - -``` -$ systemctl restart keepalived - -$ ping 192.168.20.10 -``` - ---- - -[category](#category) - -#### nginx load balancer configuration - -* on all kubernetes master nodes: install nginx load balancer - -``` -$ docker-compose -f nginx-lb/docker-compose.yaml up -d -``` - -* on all kubernetes master nodes: check nginx load balancer and keepalived - -``` -curl -k https://192.168.20.10:16443 -{ - 
"kind": "Status", - "apiVersion": "v1", - "metadata": { - - }, - "status": "Failure", - "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"", - "reason": "Forbidden", - "details": { - - }, - "code": 403 -} -``` - ---- - -[category](#category) - -#### kube-proxy configuration - -- on any kubernetes master nodes: set kube-proxy server settings, make sure this settings use the keepalived virtual IP and nginx load balancer port (here is: https://192.168.20.10:16443) - -``` -$ kubectl edit -n kube-system configmap/kube-proxy - server: https://192.168.20.10:16443 -``` - -- on any kubernetes master nodes: delete all kube-proxy pod to restart it - -``` -$ kubectl get pods --all-namespaces -o wide | grep proxy - -$ kubectl delete pod -n kube-system kube-proxy-XXX -``` - ---- - -[category](#category) - -### all nodes join the kubernetes cluster - -#### use kubeadm to join the cluster - -- on all kubernetes worker nodes: use kubeadm to join the cluster, here we use the devops-master01 apiserver address and port. - -``` -$ kubeadm join --token 7f276c.0741d82a5337f526 192.168.20.27:6443 --discovery-token-ca-cert-hash sha256:a4a1eaf725a0fc67c3028b3063b92e6af7f2eb0f4ae028f12b3415a6fd2d2a5e -``` - -- on all kubernetes worker nodes: set the `/etc/kubernetes/bootstrap-kubelet.conf` server settings, make sure this settings use the keepalived virtual IP and nginx load balancer port (here is: https://192.168.20.10:16443) - -``` -$ sed -i "s/192.168.20.27:6443/192.168.20.10:16443/g" /etc/kubernetes/bootstrap-kubelet.conf -$ sed -i "s/192.168.20.28:6443/192.168.20.10:16443/g" /etc/kubernetes/bootstrap-kubelet.conf -$ sed -i "s/192.168.20.29:6443/192.168.20.10:16443/g" /etc/kubernetes/bootstrap-kubelet.conf - -$ sed -i "s/192.168.20.27:6443/192.168.20.10:16443/g" /etc/kubernetes/kubelet.conf -$ sed -i "s/192.168.20.28:6443/192.168.20.10:16443/g" /etc/kubernetes/kubelet.conf -$ sed -i "s/192.168.20.29:6443/192.168.20.10:16443/g" /etc/kubernetes/kubelet.conf - -$ grep 192.168.20 /etc/kubernetes/*.conf -/etc/kubernetes/bootstrap-kubelet.conf: server: https://192.168.20.10:16443 -/etc/kubernetes/kubelet.conf: server: https://192.168.20.10:16443 - -$ systemctl restart docker kubelet -``` - - -``` -kubectl get nodes -NAME STATUS ROLES AGE VERSION -devops-master01 Ready master 46m v1.9.3 -devops-master02 Ready master 44m v1.9.3 -devops-master03 Ready master 44m v1.9.3 -devops-node01 Ready 50s v1.9.3 -devops-node02 Ready 26s v1.9.3 -devops-node03 Ready 22s v1.9.3 -devops-node04 Ready 17s v1.9.3 -``` - -- on any kubernetes master nodes: set the worker nodes labels - -``` -kubectl label nodes devops-node01 role=worker -kubectl label nodes devops-node02 role=worker -kubectl label nodes devops-node03 role=worker -kubectl label nodes devops-node04 role=worker -``` - -#### verify kubernetes cluster high availiablity - -- NodePort testing - -``` -# create a nginx deployment, replicas=3 -$ kubectl run nginx --image=nginx --replicas=3 --port=80 -deployment "nginx" created - -# check nginx pod status -$ kubectl get pods -l=run=nginx -o wide -NAME READY STATUS RESTARTS AGE IP NODE -nginx-6c7c8978f5-558kd 1/1 Running 0 9m 10.244.77.217 devops-node03 -nginx-6c7c8978f5-ft2z5 1/1 Running 0 9m 10.244.172.67 devops-master01 -nginx-6c7c8978f5-jr29b 1/1 Running 0 9m 10.244.85.165 devops-node04 - -# create nginx NodePort service -$ kubectl expose deployment nginx --type=NodePort --port=80 -service "nginx" exposed - -# check nginx service status -$ kubectl get svc -l=run=nginx -o wide -NAME TYPE CLUSTER-IP EXTERNAL-IP 
PORT(S)        AGE       SELECTOR
-nginx     NodePort   10.101.144.192   <none>        80:30847/TCP   10m       run=nginx
-
-# check nginx NodePort service accessibility
-$ curl devops-master01:30847
-Welcome to nginx!
-If you see this page, the nginx web server is successfully installed and
-working. Further configuration is required.
-
-For online documentation and support please refer to nginx.org.
-Commercial support is available at nginx.com.
-
-Thank you for using nginx.
-
-
-```
-
-- pods connectivity testing
-
-```
-$ kubectl run nginx-server --image=nginx --port=80
-$ kubectl expose deployment nginx-server --port=80
-$ kubectl get pods -o wide -l=run=nginx-server
-NAME                            READY     STATUS    RESTARTS   AGE       IP           NODE
-nginx-server-6d64689779-lfcxc   1/1       Running   0          2m        10.244.5.7   devops-node03
-
-$ kubectl run nginx-client -ti --rm --image=alpine -- ash
-/ # wget nginx-server
-Connecting to nginx-server (10.102.101.78:80)
-index.html           100% |*****************************************|   612   0:00:00 ETA
-/ # cat index.html
-Welcome to nginx!
-If you see this page, the nginx web server is successfully installed and
-working. Further configuration is required.
-
-For online documentation and support please refer to nginx.org.
-Commercial support is available at nginx.com.
-
-Thank you for using nginx.
- - - - -$ kubectl delete deploy,svc nginx-server -``` - -- now kubernetes high availiability cluster setup successfully 😃 diff --git a/v1.9/README_CN.md b/v1.9/README_CN.md deleted file mode 100644 index 442e142..0000000 --- a/v1.9/README_CN.md +++ /dev/null @@ -1,1036 +0,0 @@ -# kubeadm-highavailiability - 基于kubeadm的kubernetes高可用集群部署,支持v1.11.x v1.9.x v1.7.x v1.6.x版本 - -![k8s logo](../images/Kubernetes.png) - -- [中文文档(for v1.11.x版本)](../README_CN.md) -- [English document(for v1.11.x version)](../README.md) -- [中文文档(for v1.9.x版本)](../v1.9/README_CN.md) -- [English document(for v1.9.x version)](../v1.9/README.md) -- [中文文档(for v1.7.x版本)](../v1.7/README_CN.md) -- [English document(for v1.7.x version)](../v1.7/README.md) -- [中文文档(for v1.6.x版本)](../v1.6/README_CN.md) -- [English document(for v1.6.x version)](../v1.6/README.md) - ---- - -- [GitHub项目地址](https://github.com/cookeem/kubeadm-ha/) -- [OSChina项目地址](https://git.oschina.net/cookeem/kubeadm-ha/) - ---- - -- 该指引适用于v1.9.x版本的kubernetes集群 - -> v1.9.0以前的版本kubeadm还不支持高可用部署,因此不推荐作为生产环境的部署方式。从v1.9.x版本开始,kubeadm官方正式支持高可用集群的部署,安装kubeadm务必保证版本至少为1.9.0。 - -### 目录 - -1. [部署架构](#部署架构) - 1. [概要部署架构](#概要部署架构) - 1. [详细部署架构](#详细部署架构) - 1. [主机节点清单](#主机节点清单) -1. [安装前准备](#安装前准备) - 1. [版本信息](#版本信息) - 1. [所需docker镜像](#所需docker镜像) - 1. [系统设置](#系统设置) -1. [kubernetes安装](#kubernetes安装) - 1. [firewalld和iptables相关端口设置](#firewalld和iptables相关端口设置) - 1. [kubernetes相关服务安装](#kubernetes相关服务安装) -1. [配置文件初始化](#配置文件初始化) - 1. [初始化脚本配置](#初始化脚本配置) - 1. [独立etcd集群部署](#独立etcd集群部署) -1. [第一台master初始化](#第一台master初始化) - 1. [kubeadm初始化](#kubeadm初始化) - 1. [安装基础组件](#安装基础组件) -1. [master集群高可用设置](#master集群高可用设置) - 1. [复制配置](#复制配置) - 1. [其余master节点初始化](#其余master节点初始化) - 1. [keepalived安装配置](#keepalived安装配置) - 1. [nginx负载均衡配置](#nginx负载均衡配置) - 1. [kube-proxy配置](#kube-proxy配置) -1. [node节点加入高可用集群设置](#node节点加入高可用集群设置) - 1. [kubeadm加入高可用集群](#kubeadm加入高可用集群) - 1. 
[验证集群高可用设置](#验证集群高可用设置) - - - -### 部署架构 - -#### 概要部署架构 - -![ha logo](../images/ha.png) - -* kubernetes高可用的核心架构是master的高可用,kubectl、客户端以及nodes访问load balancer实现高可用。 - ---- -[返回目录](#目录) - -#### 详细部署架构 - -![k8s ha](../images/k8s-ha.png) - -* kubernetes组件说明 - -> kube-apiserver:集群核心,集群API接口、集群各个组件通信的中枢;集群安全控制; - -> etcd:集群的数据中心,用于存放集群的配置以及状态信息,非常重要,如果数据丢失那么集群将无法恢复;因此高可用集群部署首先就是etcd是高可用集群; - -> kube-scheduler:集群Pod的调度中心;默认kubeadm安装情况下--leader-elect参数已经设置为true,保证master集群中只有一个kube-scheduler处于活跃状态; - -> kube-controller-manager:集群状态管理器,当集群状态与期望不同时,kcm会努力让集群恢复期望状态,比如:当一个pod死掉,kcm会努力新建一个pod来恢复对应replicas set期望的状态;默认kubeadm安装情况下--leader-elect参数已经设置为true,保证master集群中只有一个kube-controller-manager处于活跃状态; - -> kubelet: kubernetes node agent,负责与node上的docker engine打交道; - -> kube-proxy: 每个node上一个,负责service vip到endpoint pod的流量转发,当前主要通过设置iptables规则实现。 - -* 负载均衡 - -> keepalived集群设置一个虚拟ip地址,虚拟ip地址指向devops-master01、devops-master02、devops-master03。 - -> nginx用于devops-master01、devops-master02、devops-master03的apiserver的负载均衡。外部kubectl以及nodes访问apiserver的时候就可以用过keepalived的虚拟ip(192.168.20.10)以及nginx端口(16443)访问master集群的apiserver。 - ---- - -[返回目录](#目录) - -#### 主机节点清单 - -主机名 | IP地址 | 说明 | 组件 -:--- | :--- | :--- | :--- -devops-master01 ~ 03 | 192.168.20.27 ~ 29 | master节点 * 3 | keepalived、nginx、etcd、kubelet、kube-apiserver、kube-scheduler、kube-proxy、kube-dashboard、heapster、calico -无 | 192.168.20.10 | keepalived虚拟IP | 无 -devops-node01 ~ 04 | 192.168.20.17 ~ 20 | node节点 * 4 | kubelet、kube-proxy - ---- - -[返回目录](#目录) - -### 安装前准备 - -#### 版本信息 - -* Linux版本:CentOS 7.4.1708 -* 内核版本: 4.6.4-1.el7.elrepo.x86_64 - - -``` -$ cat /etc/redhat-release -CentOS Linux release 7.4.1708 (Core) - -$ uname -r -4.6.4-1.el7.elrepo.x86_64 -``` - -* docker版本:17.12.0-ce-rc2 - -``` -$ docker version -Client: - Version: 17.12.0-ce-rc2 - API version: 1.35 - Go version: go1.9.2 - Git commit: f9cde63 - Built: Tue Dec 12 06:42:20 2017 - OS/Arch: linux/amd64 - -Server: - Engine: - Version: 17.12.0-ce-rc2 - API version: 1.35 (minimum version 1.12) - Go version: go1.9.2 - Git commit: f9cde63 - Built: Tue Dec 12 06:44:50 2017 - OS/Arch: linux/amd64 - Experimental: false -``` - -* kubeadm版本:v1.9.3 - -``` -$ kubeadm version -kubeadm version: &version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"} -``` - -* kubelet版本:v1.9.3 - -``` -$ kubelet --version -Kubernetes v1.9.3 -``` - -* 网络组件 - -> canal (flannel + calico) - ---- - -[返回目录](#目录) - -#### 所需docker镜像 - -* 相关docker镜像以及版本 - -``` -# kuberentes basic components -docker pull gcr.io/google_containers/kube-apiserver-amd64:v1.9.3 -docker pull gcr.io/google_containers/kube-proxy-amd64:v1.9.3 -docker pull gcr.io/google_containers/kube-scheduler-amd64:v1.9.3 -docker pull gcr.io/google_containers/kube-controller-manager-amd64:v1.9.3 -docker pull gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7 -docker pull gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7 -docker pull gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7 -docker pull gcr.io/google_containers/etcd-amd64:3.1.10 -docker pull gcr.io/google_containers/pause-amd64:3.0 - -# kubernetes networks add ons -docker pull quay.io/coreos/flannel:v0.9.1-amd64 -docker pull quay.io/calico/node:v3.0.3 -docker pull quay.io/calico/kube-controllers:v2.0.1 -docker pull quay.io/calico/cni:v2.0.1 - -# kubernetes dashboard -docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.3 - -# 
kubernetes heapster -docker pull gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3 -docker pull gcr.io/google_containers/heapster-grafana-amd64:v4.4.3 -docker pull gcr.io/google_containers/heapster-amd64:v1.4.2 - -# kubernetes apiserver load balancer -docker pull nginx:latest -``` - ---- - -[返回目录](#目录) - -#### 系统设置 - -* 在所有kubernetes节点上增加kubernetes仓库 - -``` -$ cat < /etc/yum.repos.d/kubernetes.repo -[kubernetes] -name=Kubernetes -baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 -enabled=1 -gpgcheck=1 -repo_gpgcheck=1 -gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg -EOF -``` - -* 在所有kubernetes节点上进行系统更新 - -``` -$ yum update -y -``` - -* 在所有kubernetes节点上设置SELINUX为permissive模式 - -``` -$ vi /etc/selinux/config -SELINUX=permissive - -$ setenforce 0 -``` - -* 在所有kubernetes节点上设置iptables参数,否则kubeadm init会提示错误 - -``` -$ cat < /etc/sysctl.d/k8s.conf -net.bridge.bridge-nf-call-ip6tables = 1 -net.bridge.bridge-nf-call-iptables = 1 -net.ipv4.ip_forward = 1 -EOF - -sysctl --system -``` - -* 在所有kubernetes节点上禁用swap - -``` -$ swapoff -a - -# 禁用fstab中的swap项目 -$ vi /etc/fstab -#/dev/mapper/centos-swap swap swap defaults 0 0 - -# 确认swap已经被禁用 -$ cat /proc/swaps -Filename Type Size Used Priority -``` - -* 在所有kubernetes节点上重启主机 - -``` -$ reboot -``` - ---- - -[返回目录](#目录) - -### kubernetes安装 - -#### firewalld和iptables相关端口设置 - -- 相关端口(master) - -协议 | 方向 | 端口 | 说明 -:--- | :--- | :--- | :--- -TCP | Inbound | 16443* | Load balancer Kubernetes API server port -TCP | Inbound | 6443* | Kubernetes API server -TCP | Inbound | 4001 | etcd listen client port -TCP | Inbound | 2379-2380 | etcd server client API -TCP | Inbound | 10250 | Kubelet API -TCP | Inbound | 10251 | kube-scheduler -TCP | Inbound | 10252 | kube-controller-manager -TCP | Inbound | 10255 | Read-only Kubelet API -TCP | Inbound | 30000-32767 | NodePort Services - -- 在所有master节点上开放相关firewalld端口(因为以上服务基于docker部署,如果docker版本为17.x,可以不进行以下设置,因为docker会自动修改iptables添加相关端口) - -``` -$ systemctl status firewalld - -$ firewall-cmd --zone=public --add-port=16443/tcp --permanent -$ firewall-cmd --zone=public --add-port=6443/tcp --permanent -$ firewall-cmd --zone=public --add-port=4001/tcp --permanent -$ firewall-cmd --zone=public --add-port=2379-2380/tcp --permanent -$ firewall-cmd --zone=public --add-port=10250/tcp --permanent -$ firewall-cmd --zone=public --add-port=10251/tcp --permanent -$ firewall-cmd --zone=public --add-port=10252/tcp --permanent -$ firewall-cmd --zone=public --add-port=10255/tcp --permanent -$ firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent - -$ firewall-cmd --reload - -$ firewall-cmd --list-all --zone=public -public (active) - target: default - icmp-block-inversion: no - interfaces: ens2f1 ens1f0 nm-bond - sources: - services: ssh dhcpv6-client - ports: 4001/tcp 6443/tcp 2379-2380/tcp 10250/tcp 10251/tcp 10252/tcp 10255/tcp 30000-32767/tcp - protocols: - masquerade: no - forward-ports: - source-ports: - icmp-blocks: - rich rules: -``` - -- 相关端口(worker) - -协议 | 方向 | 端口 | 说明 -:--- | :--- | :--- | :--- -TCP | Inbound | 10250 | Kubelet API -TCP | Inbound | 10255 | Read-only Kubelet API -TCP | Inbound | 30000-32767 | NodePort Services - -- 在所有worker节点上开放相关firewalld端口(因为以上服务基于docker部署,如果docker版本为17.x,可以不进行以下设置,因为docker会自动修改iptables添加相关端口) - -``` -$ systemctl status firewalld - -$ firewall-cmd --zone=public --add-port=10250/tcp --permanent -$ firewall-cmd --zone=public --add-port=10255/tcp --permanent -$ firewall-cmd --zone=public 
--add-port=30000-32767/tcp --permanent - -$ firewall-cmd --reload - -$ firewall-cmd --list-all --zone=public -public (active) - target: default - icmp-block-inversion: no - interfaces: ens2f1 ens1f0 nm-bond - sources: - services: ssh dhcpv6-client - ports: 10250/tcp 10255/tcp 30000-32767/tcp - protocols: - masquerade: no - forward-ports: - source-ports: - icmp-blocks: - rich rules: -``` - -* 在所有kubernetes节点上允许kube-proxy的forward - -``` -$ firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 1 -i docker0 -j ACCEPT -m comment --comment "kube-proxy redirects" -$ firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 1 -o docker0 -j ACCEPT -m comment --comment "docker subnet" -$ firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 1 -i flannel.1 -j ACCEPT -m comment --comment "flannel subnet" -$ firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 1 -o flannel.1 -j ACCEPT -m comment --comment "flannel subnet" -$ firewall-cmd --reload - -$ firewall-cmd --direct --get-all-rules -ipv4 filter INPUT 1 -i docker0 -j ACCEPT -m comment --comment 'kube-proxy redirects' -ipv4 filter FORWARD 1 -o docker0 -j ACCEPT -m comment --comment 'docker subnet' -ipv4 filter FORWARD 1 -i flannel.1 -j ACCEPT -m comment --comment 'flannel subnet' -ipv4 filter FORWARD 1 -o flannel.1 -j ACCEPT -m comment --comment 'flannel subnet' -``` - -- 在所有kubernetes节点上,删除iptables的设置,解决kube-proxy无法启用nodePort。(注意:每次重启firewalld必须执行以下命令) - -``` -iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited -``` - ---- - -[返回目录](#目录) - -#### kubernetes相关服务安装 - -* 在所有kubernetes节点上验证SELINUX模式,必须保证SELINUX为permissive模式,否则kubernetes启动会出现各种异常 - -``` -$ getenforce -Permissive -``` - -* 在所有kubernetes节点上安装并启动kubernetes - -``` -$ yum install -y docker-ce-17.12.0.ce-0.2.rc2.el7.centos.x86_64 -$ yum install -y docker-compose-1.9.0-5.el7.noarch -$ systemctl enable docker && systemctl start docker - -$ yum install -y kubelet-1.9.3-0.x86_64 kubeadm-1.9.3-0.x86_64 kubectl-1.9.3-0.x86_64 -$ systemctl enable kubelet && systemctl start kubelet -``` - -* 在所有kubernetes节点上设置kubelet使用cgroupfs,与dockerd保持一致,否则kubelet会启动报错 - -``` -# 默认kubelet使用的cgroup-driver=systemd,改为cgroup-driver=cgroupfs -$ vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -#Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd" -Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs" - -# 重设kubelet服务,并重启kubelet服务 -$ systemctl daemon-reload && systemctl restart kubelet -``` - -* 在所有master节点上安装并启动keepalived - -``` -$ yum install -y keepalived -$ systemctl enable keepalived && systemctl restart keepalived -``` - ---- - -[返回目录](#目录) - -### 配置文件初始化 - -#### 初始化脚本配置 - -* 在所有master节点上获取代码,并进入代码目录 - -``` -$ git clone https://github.com/cookeem/kubeadm-ha - -$ cd kubeadm-ha -``` - -* 在所有master节点上设置初始化脚本配置,每一项配置参见脚本中的配置说明,请务必正确配置。该脚本用于生成相关重要的配置文件 - -``` -$ vi create-config.sh - -# local machine ip address -export K8SHA_IPLOCAL=192.168.20.27 - -# local machine etcd name, options: etcd1, etcd2, etcd3 -export K8SHA_ETCDNAME=etcd1 - -# local machine keepalived state config, options: MASTER, BACKUP. One keepalived cluster only one MASTER, other's are BACKUP -export K8SHA_KA_STATE=MASTER - -# local machine keepalived priority config, options: 102, 101, 100. 
MASTER must 102 -export K8SHA_KA_PRIO=102 - -# local machine keepalived network interface name config, for example: eth0 -export K8SHA_KA_INTF=nm-bond - -####################################### -# all masters settings below must be same -####################################### - -# master keepalived virtual ip address -export K8SHA_IPVIRTUAL=192.168.20.10 - -# master01 ip address -export K8SHA_IP1=192.168.20.27 - -# master02 ip address -export K8SHA_IP2=192.168.20.28 - -# master03 ip address -export K8SHA_IP3=192.168.20.29 - -# master01 hostname -export K8SHA_HOSTNAME1=devops-master01 - -# master02 hostname -export K8SHA_HOSTNAME2=devops-master02 - -# master03 hostname -export K8SHA_HOSTNAME3=devops-master03 - -# keepalived auth_pass config, all masters must be same -export K8SHA_KA_AUTH=4cdf7dc3b4c90194d1600c483e10ad1d - -# kubernetes cluster token, you can use 'kubeadm token generate' to get a new one -export K8SHA_TOKEN=7f276c.0741d82a5337f526 - -# kubernetes CIDR pod subnet, if CIDR pod subnet is "10.244.0.0/16" please set to "10.244.0.0\\/16" -export K8SHA_CIDR=10.244.0.0\\/16 - -# kubernetes CIDR service subnet, if CIDR service subnet is "10.96.0.0/12" please set to "10.96.0.0\\/12" -export K8SHA_SVC_CIDR=10.96.0.0\\/12 - -# calico network settings, set a reachable ip address for the cluster network interface, for example you can use the gateway ip address -export K8SHA_CALICO_REACHABLE_IP=192.168.20.1 -``` - -* 在所有master节点上运行配置脚本,创建对应的配置文件,配置文件包括: - -> etcd集群docker-compose.yaml文件 - -> keepalived配置文件 - -> nginx负载均衡集群docker-compose.yaml文件 - -> kubeadm init 配置文件 - -> canal配置文件 - -``` -$ ./create-config.sh -set etcd cluster docker-compose.yaml file success: etcd/docker-compose.yaml -set keepalived config file success: /etc/keepalived/keepalived.conf -set nginx load balancer config file success: nginx-lb/nginx-lb.conf -set kubeadm init config file success: kubeadm-init.yaml -set canal deployment config file success: kube-canal/canal.yaml -``` - ---- - -[返回目录](#目录) - -#### 独立etcd集群部署 - -* 在所有master节点上重置并启动etcd集群(非TLS模式) - -``` -# 重置kubernetes集群 -$ kubeadm reset - -# 清空etcd集群数据 -$ rm -rf /var/lib/etcd-cluster - -# 重置并启动etcd集群 -$ docker-compose --file etcd/docker-compose.yaml stop -$ docker-compose --file etcd/docker-compose.yaml rm -f -$ docker-compose --file etcd/docker-compose.yaml up -d - -# 验证etcd集群状态是否正常 - -$ docker exec -ti etcd etcdctl cluster-health -member 531504c79088f553 is healthy: got healthy result from http://192.168.20.29:2379 -member 56c53113d5e1cfa3 is healthy: got healthy result from http://192.168.20.27:2379 -member 7026e604579e4d64 is healthy: got healthy result from http://192.168.20.28:2379 -cluster is healthy - -$ docker exec -ti etcd etcdctl member list -531504c79088f553: name=etcd3 peerURLs=http://192.168.20.29:2380 clientURLs=http://192.168.20.29:2379,http://192.168.20.29:4001 isLeader=false -56c53113d5e1cfa3: name=etcd1 peerURLs=http://192.168.20.27:2380 clientURLs=http://192.168.20.27:2379,http://192.168.20.27:4001 isLeader=false -7026e604579e4d64: name=etcd2 peerURLs=http://192.168.20.28:2380 clientURLs=http://192.168.20.28:2379,http://192.168.20.28:4001 isLeader=true -``` - ---- - -[返回目录](#目录) - -### 第一台master初始化 - -#### kubeadm初始化 - -* 在所有master节点上重置网络 - -``` -$ systemctl stop kubelet -$ systemctl stop docker -$ rm -rf /var/lib/cni/ -$ rm -rf /var/lib/kubelet/* -$ rm -rf /etc/cni/ - -# 删除遗留的网络接口 -$ ip a | grep -E 'docker|flannel|cni' -$ ip link del docker0 -$ ip link del flannel.1 -$ ip link del cni0 - -$ systemctl restart docker && systemctl restart kubelet 
-$ ip a | grep -E 'docker|flannel|cni' -``` - -* 在devops-master01上进行初始化,注意,务必把输出的kubeadm join --token XXX --discovery-token-ca-cert-hash YYY 信息记录下来,后续操作需要用到 - -``` -$ kubeadm init --config=kubeadm-init.yaml -... - kubeadm join --token 7f276c.0741d82a5337f526 192.168.20.27:6443 --discovery-token-ca-cert-hash sha256:a4a1eaf725a0fc67c3028b3063b92e6af7f2eb0f4ae028f12b3415a6fd2d2a5e -``` - -* 在所有master节点上设置kubectl客户端连接 - -``` -$ vi ~/.bashrc -export KUBECONFIG=/etc/kubernetes/admin.conf - -$ source ~/.bashrc -``` - -#### 安装基础组件 - -* 在devops-master01上安装flannel网络组件 - -``` -# 没有网络组件的情况下,节点状态是不正常的 -$ kubectl get node -NAME STATUS ROLES AGE VERSION -devops-master01 NotReady master 14s v1.9.1 - -# 安装canal网络组件 -$ kubectl apply -f kube-canal/ -configmap "canal-config" created -daemonset "canal" created -customresourcedefinition "felixconfigurations.crd.projectcalico.org" created -customresourcedefinition "bgpconfigurations.crd.projectcalico.org" created -customresourcedefinition "ippools.crd.projectcalico.org" created -customresourcedefinition "clusterinformations.crd.projectcalico.org" created -customresourcedefinition "globalnetworkpolicies.crd.projectcalico.org" created -customresourcedefinition "networkpolicies.crd.projectcalico.org" created -serviceaccount "canal" created -clusterrole "calico" created -clusterrole "flannel" created -clusterrolebinding "canal-flannel" created -clusterrolebinding "canal-calico" created - -# 等待所有pods正常 -$ kubectl get pods --all-namespaces -o wide -NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE -kube-system canal-hpn82 3/3 Running 0 1m 192.168.20.27 devops-master01 -kube-system kube-apiserver-devops-master01 1/1 Running 0 1m 192.168.20.27 devops-master01 -kube-system kube-controller-manager-devops-master01 1/1 Running 0 50s 192.168.20.27 devops-master01 -kube-system kube-dns-6f4fd4bdf-vwbk8 3/3 Running 0 1m 10.244.0.2 devops-master01 -kube-system kube-proxy-mr6l8 1/1 Running 0 1m 192.168.20.27 devops-master01 -kube-system kube-scheduler-devops-master01 1/1 Running 0 57s 192.168.20.27 devops-master01 -``` - -* 在devops-master01上安装dashboard - -``` -# 设置master节点为schedulable -$ kubectl taint nodes --all node-role.kubernetes.io/master- - -$ kubectl apply -f kube-dashboard/ -serviceaccount "admin-user" created -clusterrolebinding "admin-user" created -secret "kubernetes-dashboard-certs" created -serviceaccount "kubernetes-dashboard" created -role "kubernetes-dashboard-minimal" created -rolebinding "kubernetes-dashboard-minimal" created -deployment "kubernetes-dashboard" created -service "kubernetes-dashboard" created -``` - -* 通过浏览器访问dashboard地址 - -> https://devops-master01:30000/#!/login - -* dashboard登录页面效果如下图 - -![dashboard-login](images/dashboard-login.png) - -* 获取token,把token粘贴到login页面的token中,即可进入dashboard - -``` -$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') -``` - -![dashboard](images/dashboard.png) - -* 在devops-master01上安装heapster - -``` -$ kubectl apply -f kube-heapster/influxdb/ -service "monitoring-grafana" created -serviceaccount "heapster" created -deployment "heapster" created -service "heapster" created -deployment "monitoring-influxdb" created -service "monitoring-influxdb" created - -$ kubectl apply -f kube-heapster/rbac/ -clusterrolebinding "heapster" created - -$ kubectl get pods --all-namespaces -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system canal-hpn82 3/3 Running 0 6m -kube-system heapster-65c5499476-gg2tk 1/1 Running 0 2m -kube-system kube-apiserver-devops-master01 1/1 
Running 0 6m -kube-system kube-controller-manager-devops-master01 1/1 Running 0 5m -kube-system kube-dns-6f4fd4bdf-vwbk8 3/3 Running 0 6m -kube-system kube-proxy-mr6l8 1/1 Running 0 6m -kube-system kube-scheduler-devops-master01 1/1 Running 0 6m -kube-system kubernetes-dashboard-7c7bfdd855-2slp2 1/1 Running 0 4m -kube-system monitoring-grafana-6774f65b56-mwdjv 1/1 Running 0 2m -kube-system monitoring-influxdb-59d57d4d58-xmrxk 1/1 Running 0 2m - - -# 等待5分钟 -$ kubectl top nodes -NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% -devops-master01 242m 0% 1690Mi 0% -``` - -* 访问dashboard地址,等10分钟,就会显示性能数据 - -> https://devops-master01:30000/#!/login - -![heapster-dashboard](images/heapster-dashboard.png) - -![heapster](images/heapster.png) - -* 至此,第一台master成功安装,并已经完成canal、dashboard、heapster的部署 - ---- - -[返回目录](#目录) - -### master集群高可用设置 - -#### 复制配置 - -* 在devops-master01上复制目录/etc/kubernetes/pki到devops-master02、devops-master03,从v1.9.x开始,kubeadm会检测pki目录是否有证书,如果已经存在证书则跳过证书生成的步骤 - -``` -scp -r /etc/kubernetes/pki devops-master02:/etc/kubernetes/ - -scp -r /etc/kubernetes/pki devops-master03:/etc/kubernetes/ -``` - ---- -[返回目录](#目录) - -#### 其余master节点初始化 - -* 在devops-master02进行初始化,等待所有pods正常启动后再进行下一个master初始化,特别要保证kube-apiserver-{current-node-name}处于running状态 - -``` -# 输出的token和discovery-token-ca-cert-hash应该与devops-master01上的完全一致 -$ kubeadm init --config=kubeadm-init.yaml -... - kubeadm join --token 7f276c.0741d82a5337f526 192.168.20.28:6443 --discovery-token-ca-cert-hash sha256:a4a1eaf725a0fc67c3028b3063b92e6af7f2eb0f4ae028f12b3415a6fd2d2a5e -``` - -* 在devops-master03进行初始化,等待所有pods正常启动后再进行下一个master初始化,特别要保证kube-apiserver-{current-node-name}处于running状态 - -``` -# 输出的token和discovery-token-ca-cert-hash应该与devops-master01上的完全一致 -$ kubeadm init --config=kubeadm-init.yaml -... 
- kubeadm join --token 7f276c.0741d82a5337f526 192.168.20.29:6443 --discovery-token-ca-cert-hash sha256:a4a1eaf725a0fc67c3028b3063b92e6af7f2eb0f4ae028f12b3415a6fd2d2a5e -``` - -* 在devops-master01上检查nodes加入情况 - -``` -$ kubectl get nodes -NAME STATUS ROLES AGE VERSION -devops-master01 Ready master 19m v1.9.3 -devops-master02 Ready master 4m v1.9.3 -devops-master03 Ready master 4m v1.9.3 -``` - -* 在devops-master01上检查高可用状态 - -``` -$ kubectl get pods --all-namespaces -o wide -NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE -kube-system canal-cw8tw 3/3 Running 4 3m 192.168.20.29 devops-master03 -kube-system canal-d54hs 3/3 Running 3 5m 192.168.20.28 devops-master02 -kube-system canal-hpn82 3/3 Running 5 17m 192.168.20.27 devops-master01 -kube-system heapster-65c5499476-zwgnh 1/1 Running 1 8m 10.244.0.7 devops-master01 -kube-system kube-apiserver-devops-master01 1/1 Running 1 2m 192.168.20.27 devops-master01 -kube-system kube-apiserver-devops-master02 1/1 Running 0 11s 192.168.20.28 devops-master02 -kube-system kube-apiserver-devops-master03 1/1 Running 0 12s 192.168.20.29 devops-master03 -kube-system kube-controller-manager-devops-master01 1/1 Running 1 16m 192.168.20.27 devops-master01 -kube-system kube-controller-manager-devops-master02 1/1 Running 1 3m 192.168.20.28 devops-master02 -kube-system kube-controller-manager-devops-master03 1/1 Running 1 2m 192.168.20.29 devops-master03 -kube-system kube-dns-6f4fd4bdf-vwbk8 3/3 Running 3 17m 10.244.0.2 devops-master01 -kube-system kube-proxy-59pwn 1/1 Running 1 5m 192.168.20.28 devops-master02 -kube-system kube-proxy-jxt5s 1/1 Running 1 3m 192.168.20.29 devops-master03 -kube-system kube-proxy-mr6l8 1/1 Running 1 17m 192.168.20.27 devops-master01 -kube-system kube-scheduler-devops-master01 1/1 Running 1 16m 192.168.20.27 devops-master01 -kube-system kube-scheduler-devops-master02 1/1 Running 1 3m 192.168.20.28 devops-master02 -kube-system kube-scheduler-devops-master03 1/1 Running 1 2m 192.168.20.29 devops-master03 -kube-system kubernetes-dashboard-7c7bfdd855-2slp2 1/1 Running 1 15m 10.244.0.3 devops-master01 -kube-system monitoring-grafana-6774f65b56-mwdjv 1/1 Running 1 13m 10.244.0.4 devops-master01 -kube-system monitoring-influxdb-59d57d4d58-xmrxk 1/1 Running 1 13m 10.244.0.6 devops-master01 -``` - -* 设置所有master的scheduable - -``` -$ kubectl taint nodes --all node-role.kubernetes.io/master- -node "devops-master02" untainted -node "devops-master03" untainted -``` - -* 对基础组件进行多节点scale - -``` -$ kubectl get deploy -n kube-system -NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE -heapster 1 1 1 1 3d -kube-dns 2 2 2 2 4d -kubernetes-dashboard 1 1 1 1 3d -monitoring-grafana 1 1 1 1 3d -monitoring-influxdb 1 1 1 1 3d - -# dns支持多节点 -$ kubectl scale --replicas=2 -n kube-system deployment/kube-dns -$ kubectl get pods --all-namespaces -o wide| grep kube-dns - -``` - ---- - -[返回目录](#目录) - -#### keepalived安装配置 - -* 在master上安装keepalived - -``` -$ systemctl restart keepalived - -$ ping 192.168.20.10 -``` - ---- - -[返回目录](#目录) - -#### nginx负载均衡配置 - -* 在master上安装并启动nginx作为负载均衡 - -``` -$ docker-compose -f nginx-lb/docker-compose.yaml up -d -``` - -* 在master上验证负载均衡和keepalived是否成功 - -``` -curl -k https://192.168.20.10:16443 -{ - "kind": "Status", - "apiVersion": "v1", - "metadata": { - - }, - "status": "Failure", - "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"", - "reason": "Forbidden", - "details": { - - }, - "code": 403 -} -``` - ---- - -[返回目录](#目录) - -#### kube-proxy配置 - -- 在devops-master01上设置proxy高可用,设置server指向高可用虚拟IP以及负载均衡的16443端口 -``` -$ 
kubectl edit -n kube-system configmap/kube-proxy - server: https://192.168.20.10:16443 -``` - -- 在master上重启proxy - -``` -$ kubectl get pods --all-namespaces -o wide | grep proxy - -$ kubectl delete pod -n kube-system kube-proxy-XXX -``` - ---- - -[返回目录](#目录) - -### node节点加入高可用集群设置 - -#### kubeadm加入高可用集群 - -- 在所有worker节点上进行加入kubernetes集群操作,这里统一使用devops-master01的apiserver地址来加入集群 - -``` -$ kubeadm join --token 7f276c.0741d82a5337f526 192.168.20.27:6443 --discovery-token-ca-cert-hash sha256:a4a1eaf725a0fc67c3028b3063b92e6af7f2eb0f4ae028f12b3415a6fd2d2a5e -``` - -- 在所有worker节点上修改kubernetes集群设置,更改server为高可用虚拟IP以及负载均衡的16443端口 - -``` -$ sed -i "s/192.168.20.27:6443/192.168.20.10:16443/g" /etc/kubernetes/bootstrap-kubelet.conf -$ sed -i "s/192.168.20.28:6443/192.168.20.10:16443/g" /etc/kubernetes/bootstrap-kubelet.conf -$ sed -i "s/192.168.20.29:6443/192.168.20.10:16443/g" /etc/kubernetes/bootstrap-kubelet.conf - -$ sed -i "s/192.168.20.27:6443/192.168.20.10:16443/g" /etc/kubernetes/kubelet.conf -$ sed -i "s/192.168.20.28:6443/192.168.20.10:16443/g" /etc/kubernetes/kubelet.conf -$ sed -i "s/192.168.20.29:6443/192.168.20.10:16443/g" /etc/kubernetes/kubelet.conf - -$ grep 192.168.20 /etc/kubernetes/*.conf -/etc/kubernetes/bootstrap-kubelet.conf: server: https://192.168.20.10:16443 -/etc/kubernetes/kubelet.conf: server: https://192.168.20.10:16443 - -$ systemctl restart docker kubelet -``` - - -``` -kubectl get nodes -NAME STATUS ROLES AGE VERSION -devops-master01 Ready master 46m v1.9.3 -devops-master02 Ready master 44m v1.9.3 -devops-master03 Ready master 44m v1.9.3 -devops-node01 Ready 50s v1.9.3 -devops-node02 Ready 26s v1.9.3 -devops-node03 Ready 22s v1.9.3 -devops-node04 Ready 17s v1.9.3 -``` - -- 设置workers的节点标签 - -``` -kubectl label nodes devops-node01 role=worker -kubectl label nodes devops-node02 role=worker -kubectl label nodes devops-node03 role=worker -kubectl label nodes devops-node04 role=worker -``` - -#### 验证集群高可用设置 - -- NodePort测试 - -``` -# 创建一个replicas=3的nginx deployment -$ kubectl run nginx --image=nginx --replicas=3 --port=80 -deployment "nginx" created - -# 检查nginx pod的创建情况 -$ kubectl get pods -l=run=nginx -o wide -NAME READY STATUS RESTARTS AGE IP NODE -nginx-6c7c8978f5-558kd 1/1 Running 0 9m 10.244.77.217 devops-node03 -nginx-6c7c8978f5-ft2z5 1/1 Running 0 9m 10.244.172.67 devops-master01 -nginx-6c7c8978f5-jr29b 1/1 Running 0 9m 10.244.85.165 devops-node04 - -# 创建nginx的NodePort service -$ kubectl expose deployment nginx --type=NodePort --port=80 -service "nginx" exposed - -# 检查nginx service的创建情况 -$ kubectl get svc -l=run=nginx -o wide -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR -nginx NodePort 10.101.144.192 80:30847/TCP 10m run=nginx - -# 检查nginx NodePort service是否正常提供服务 -$ curl devops-master01:30847 - - - -Welcome to nginx! - - - -

-Welcome to nginx!
-
-If you see this page, the nginx web server is successfully installed and
-working. Further configuration is required.
-
-For online documentation and support please refer to nginx.org.
-Commercial support is available at nginx.com.
-
-Thank you for using nginx.
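# (illustrative, not part of the original run) a NodePort is opened on every
# node in the cluster, so the same request can be pointed at any worker or at
# the keepalived VIP; 30847 is just the port allocated in this run, substitute
# whatever `kubectl get svc nginx` reports
# $ curl devops-node01:30847
# $ curl 192.168.20.10:30847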

- - - -$ kubectl delete deploy,svc nginx -``` - -- pod之间互访测试 - -``` -$ kubectl run nginx-server --image=nginx --port=80 -$ kubectl expose deployment nginx-server --port=80 -$ kubectl get pods -o wide -l=run=nginx-server -NAME READY STATUS RESTARTS AGE IP NODE -nginx-server-6d64689779-lfcxc 1/1 Running 0 2m 10.244.5.7 devops-node03 - -$ kubectl run nginx-client -ti --rm --image=alpine -- ash -/ # wget nginx-server -Connecting to nginx-server (10.102.101.78:80) -index.html 100% |*****************************************| 612 0:00:00 ETA -/ # cat index.html - - - -Welcome to nginx! - - - -

-Welcome to nginx!
-
-If you see this page, the nginx web server is successfully installed and
-working. Further configuration is required.
-
-For online documentation and support please refer to nginx.org.
-Commercial support is available at nginx.com.
-
-Thank you for using nginx.
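# (note, not part of the original run) the wget above resolved the service name
# nginx-server through kube-dns to its ClusterIP (10.102.101.78) and the reply
# travelled over kube-proxy and the canal pod network, so this single test
# exercises DNS, service routing and the overlay network at once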

- - - - -$ kubectl delete deploy,svc nginx-server -``` - -- 至此kubernetes高可用集群完成部署😃 - diff --git a/v1.9/create-config.sh b/v1.9/create-config.sh deleted file mode 100755 index 6e21a28..0000000 --- a/v1.9/create-config.sh +++ /dev/null @@ -1,121 +0,0 @@ -#!/bin/bash - -# local machine ip address -export K8SHA_IPLOCAL=192.168.20.27 - -# local machine etcd name, options: etcd1, etcd2, etcd3 -export K8SHA_ETCDNAME=etcd1 - -# local machine keepalived state config, options: MASTER, BACKUP. One keepalived cluster only one MASTER, other's are BACKUP -export K8SHA_KA_STATE=MASTER - -# local machine keepalived priority config, options: 102, 101, 100. MASTER must 102 -export K8SHA_KA_PRIO=102 - -# local machine keepalived network interface name config, for example: eth0 -export K8SHA_KA_INTF=nm-bond - -####################################### -# all masters settings below must be same -####################################### - -# master keepalived virtual ip address -export K8SHA_IPVIRTUAL=192.168.20.10 - -# master01 ip address -export K8SHA_IP1=192.168.20.27 - -# master02 ip address -export K8SHA_IP2=192.168.20.28 - -# master03 ip address -export K8SHA_IP3=192.168.20.29 - -# master01 hostname -export K8SHA_HOSTNAME1=devops-master01 - -# master02 hostname -export K8SHA_HOSTNAME2=devops-master02 - -# master03 hostname -export K8SHA_HOSTNAME3=devops-master03 - -# keepalived auth_pass config, all masters must be same -export K8SHA_KA_AUTH=4cdf7dc3b4c90194d1600c483e10ad1d - -# kubernetes cluster token, you can use 'kubeadm token generate' to get a new one -export K8SHA_TOKEN=7f276c.0741d82a5337f526 - -# kubernetes CIDR pod subnet, if CIDR pod subnet is "10.244.0.0/16" please set to "10.244.0.0\\/16" -export K8SHA_CIDR=10.244.0.0\\/16 - -# kubernetes CIDR service subnet, if CIDR service subnet is "10.96.0.0/12" please set to "10.96.0.0\\/12" -export K8SHA_SVC_CIDR=10.96.0.0\\/12 - -# calico network settings, set a reachable ip address for the cluster network interface, for example you can use the gateway ip address -export K8SHA_CALICO_REACHABLE_IP=192.168.20.1 - -############################## -# please do not modify anything below -############################## - -# set etcd cluster docker-compose.yaml file -sed \ --e "s/K8SHA_ETCDNAME/$K8SHA_ETCDNAME/g" \ --e "s/K8SHA_IPLOCAL/$K8SHA_IPLOCAL/g" \ --e "s/K8SHA_IP1/$K8SHA_IP1/g" \ --e "s/K8SHA_IP2/$K8SHA_IP2/g" \ --e "s/K8SHA_IP3/$K8SHA_IP3/g" \ -etcd/docker-compose.yaml.tpl > etcd/docker-compose.yaml - -echo 'set etcd cluster docker-compose.yaml file success: etcd/docker-compose.yaml' - -# set keepalived config file -mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak - -cp keepalived/check_apiserver.sh /etc/keepalived/ - -sed \ --e "s/K8SHA_KA_STATE/$K8SHA_KA_STATE/g" \ --e "s/K8SHA_KA_INTF/$K8SHA_KA_INTF/g" \ --e "s/K8SHA_IPLOCAL/$K8SHA_IPLOCAL/g" \ --e "s/K8SHA_KA_PRIO/$K8SHA_KA_PRIO/g" \ --e "s/K8SHA_IPVIRTUAL/$K8SHA_IPVIRTUAL/g" \ --e "s/K8SHA_KA_AUTH/$K8SHA_KA_AUTH/g" \ -keepalived/keepalived.conf.tpl > /etc/keepalived/keepalived.conf - -echo 'set keepalived config file success: /etc/keepalived/keepalived.conf' - -# set nginx load balancer config file -sed \ --e "s/K8SHA_IP1/$K8SHA_IP1/g" \ --e "s/K8SHA_IP2/$K8SHA_IP2/g" \ --e "s/K8SHA_IP3/$K8SHA_IP3/g" \ -nginx-lb/nginx-lb.conf.tpl > nginx-lb/nginx-lb.conf - -echo 'set nginx load balancer config file success: nginx-lb/nginx-lb.conf' - -# set kubeadm init config file -sed \ --e "s/K8SHA_HOSTNAME1/$K8SHA_HOSTNAME1/g" \ --e "s/K8SHA_HOSTNAME2/$K8SHA_HOSTNAME2/g" \ --e 
"s/K8SHA_HOSTNAME3/$K8SHA_HOSTNAME3/g" \ --e "s/K8SHA_IP1/$K8SHA_IP1/g" \ --e "s/K8SHA_IP2/$K8SHA_IP2/g" \ --e "s/K8SHA_IP3/$K8SHA_IP3/g" \ --e "s/K8SHA_IPVIRTUAL/$K8SHA_IPVIRTUAL/g" \ --e "s/K8SHA_TOKEN/$K8SHA_TOKEN/g" \ --e "s/K8SHA_CIDR/$K8SHA_CIDR/g" \ --e "s/K8SHA_SVC_CIDR/$K8SHA_SVC_CIDR/g" \ -kubeadm-init.yaml.tpl > kubeadm-init.yaml - -echo 'set kubeadm init config file success: kubeadm-init.yaml' - -# set canal deployment config file - -sed \ --e "s/K8SHA_CIDR/$K8SHA_CIDR/g" \ --e "s/K8SHA_CALICO_REACHABLE_IP/$K8SHA_CALICO_REACHABLE_IP/g" \ -kube-canal/canal.yaml.tpl > kube-canal/canal.yaml - -echo 'set canal deployment config file success: kube-canal/canal.yaml' diff --git a/v1.9/etcd/docker-compose.yaml.tpl b/v1.9/etcd/docker-compose.yaml.tpl deleted file mode 100644 index 48b5dca..0000000 --- a/v1.9/etcd/docker-compose.yaml.tpl +++ /dev/null @@ -1,25 +0,0 @@ -version: '2' -services: - etcd: - image: gcr.io/google_containers/etcd-amd64:3.1.10 - container_name: etcd - hostname: etcd - volumes: - - /etc/ssl/certs:/etc/ssl/certs - - /var/lib/etcd-cluster:/var/lib/etcd - ports: - - 4001:4001 - - 2380:2380 - - 2379:2379 - restart: always - command: ["sh", "-c", "etcd --name=K8SHA_ETCDNAME \ - --advertise-client-urls=http://K8SHA_IPLOCAL:2379,http://K8SHA_IPLOCAL:4001 \ - --listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \ - --initial-advertise-peer-urls=http://K8SHA_IPLOCAL:2380 \ - --listen-peer-urls=http://0.0.0.0:2380 \ - --initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \ - --initial-cluster=etcd1=http://K8SHA_IP1:2380,etcd2=http://K8SHA_IP2:2380,etcd3=http://K8SHA_IP3:2380 \ - --initial-cluster-state=new \ - --auto-tls \ - --peer-auto-tls \ - --data-dir=/var/lib/etcd"] diff --git a/v1.9/images/dashboard-login.png b/v1.9/images/dashboard-login.png deleted file mode 100644 index 72b197b..0000000 Binary files a/v1.9/images/dashboard-login.png and /dev/null differ diff --git a/v1.9/images/dashboard.png b/v1.9/images/dashboard.png deleted file mode 100644 index 0b3ceae..0000000 Binary files a/v1.9/images/dashboard.png and /dev/null differ diff --git a/v1.9/images/heapster-dashboard.png b/v1.9/images/heapster-dashboard.png deleted file mode 100644 index a2f4d2b..0000000 Binary files a/v1.9/images/heapster-dashboard.png and /dev/null differ diff --git a/v1.9/images/heapster.png b/v1.9/images/heapster.png deleted file mode 100644 index 23257bf..0000000 Binary files a/v1.9/images/heapster.png and /dev/null differ diff --git a/v1.9/keepalived/check_apiserver.sh b/v1.9/keepalived/check_apiserver.sh deleted file mode 100755 index 3ceb7a8..0000000 --- a/v1.9/keepalived/check_apiserver.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/bin/bash - -# if check error then repeat check for 12 times, else exit -err=0 -for k in $(seq 1 12) -do - check_code=$(ps -ef | grep kube-apiserver | grep -v color | grep -v grep | wc -l) - if [[ $check_code == "0" ]]; then - err=$(expr $err + 1) - sleep 5 - continue - else - err=0 - break - fi -done - -if [[ $err != "0" ]]; then - echo "systemctl stop keepalived" - /usr/bin/systemctl stop keepalived - exit 1 -else - exit 0 -fi diff --git a/v1.9/keepalived/keepalived.conf.tpl b/v1.9/keepalived/keepalived.conf.tpl deleted file mode 100644 index 52ae75a..0000000 --- a/v1.9/keepalived/keepalived.conf.tpl +++ /dev/null @@ -1,29 +0,0 @@ -! 
Configuration File for keepalived -global_defs { - router_id LVS_DEVEL -} -vrrp_script chk_apiserver { - script "/etc/keepalived/check_apiserver.sh" - interval 2 - weight -5 - fall 3 - rise 2 -} -vrrp_instance VI_1 { - state K8SHA_KA_STATE - interface K8SHA_KA_INTF - mcast_src_ip K8SHA_IPLOCAL - virtual_router_id 51 - priority K8SHA_KA_PRIO - advert_int 2 - authentication { - auth_type PASS - auth_pass K8SHA_KA_AUTH - } - virtual_ipaddress { - K8SHA_IPVIRTUAL - } - track_script { - chk_apiserver - } -} diff --git a/v1.9/kube-canal/canal.yaml.tpl b/v1.9/kube-canal/canal.yaml.tpl deleted file mode 100644 index f7942f1..0000000 --- a/v1.9/kube-canal/canal.yaml.tpl +++ /dev/null @@ -1,357 +0,0 @@ -# Canal Version v3.0.3 -# https://docs.projectcalico.org/v3.0/releases#v3.0.3 -# This manifest includes the following component versions: -# calico/node:v3.0.3 -# calico/cni:v2.0.1 -# coreos/flannel:v0.9.1 - -# This ConfigMap can be used to configure a self-hosted Canal installation. -kind: ConfigMap -apiVersion: v1 -metadata: - name: canal-config - namespace: kube-system -data: - # The interface used by canal for host <-> host communication. - # If left blank, then the interface is chosen using the node's - # default route. - canal_iface: "" - - # Whether or not to masquerade traffic to destinations not within - # the pod network. - masquerade: "true" - - # The CNI network configuration to install on each node. - cni_network_config: |- - { - "name": "k8s-pod-network", - "cniVersion": "0.3.0", - "plugins": [ - { - "type": "calico", - "log_level": "info", - "datastore_type": "kubernetes", - "nodename": "__KUBERNETES_NODE_NAME__", - "ipam": { - "type": "host-local", - "subnet": "usePodCidr" - }, - "policy": { - "type": "k8s", - "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__" - }, - "kubernetes": { - "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__", - "kubeconfig": "__KUBECONFIG_FILEPATH__" - } - }, - { - "type": "portmap", - "capabilities": {"portMappings": true}, - "snat": true - } - ] - } - - # Flannel network configuration. Mounted into the flannel container. - net-conf.json: | - { - "Network": "K8SHA_CIDR", - "Backend": { - "Type": "vxlan" - } - } - ---- - -# This manifest installs the calico/node container, as well -# as the Calico CNI plugins and network config on -# each master and worker node in a Kubernetes cluster. -kind: DaemonSet -apiVersion: extensions/v1beta1 -metadata: - name: canal - namespace: kube-system - labels: - k8s-app: canal -spec: - selector: - matchLabels: - k8s-app: canal - updateStrategy: - type: RollingUpdate - rollingUpdate: - maxUnavailable: 1 - template: - metadata: - labels: - k8s-app: canal - annotations: - scheduler.alpha.kubernetes.io/critical-pod: '' - spec: - hostNetwork: true - serviceAccountName: canal - tolerations: - # Tolerate this effect so the pods will be schedulable at all times - - effect: NoSchedule - operator: Exists - # Mark the pod as a critical add-on for rescheduling. - - key: CriticalAddonsOnly - operator: Exists - - effect: NoExecute - operator: Exists - # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force - # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods. - terminationGracePeriodSeconds: 0 - containers: - # Runs calico/node container on each Kubernetes node. This - # container programs network policy and routes on each - # host. - - name: calico-node - image: quay.io/calico/node:v3.0.3 - env: - # Use Kubernetes API as the backing datastore. 
- - name: DATASTORE_TYPE - value: "kubernetes" - # Enable felix logging. - - name: FELIX_LOGSEVERITYSYS - value: "info" - # Don't enable BGP. - - name: CALICO_NETWORKING_BACKEND - value: "none" - # Cluster type to identify the deployment type - - name: CLUSTER_TYPE - value: "k8s,canal" - # Disable file logging so `kubectl logs` works. - - name: CALICO_DISABLE_FILE_LOGGING - value: "true" - # Period, in seconds, at which felix re-applies all iptables state - - name: FELIX_IPTABLESREFRESHINTERVAL - value: "60" - # Disable IPV6 support in Felix. - - name: FELIX_IPV6SUPPORT - value: "false" - # Wait for the datastore. - - name: WAIT_FOR_DATASTORE - value: "true" - # No IP address needed. - - name: IP - value: "autodetect" - - name: IP_AUTODETECTION_METHOD - value: "can-reach=K8SHA_CALICO_REACHABLE_IP" - - name: NODENAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - # Set Felix endpoint to host default action to ACCEPT. - - name: FELIX_DEFAULTENDPOINTTOHOSTACTION - value: "ACCEPT" - - name: FELIX_HEALTHENABLED - value: "true" - securityContext: - privileged: true - resources: - requests: - cpu: 250m - livenessProbe: - httpGet: - path: /liveness - port: 9099 - periodSeconds: 10 - initialDelaySeconds: 10 - failureThreshold: 6 - readinessProbe: - httpGet: - path: /readiness - port: 9099 - periodSeconds: 10 - volumeMounts: - - mountPath: /lib/modules - name: lib-modules - readOnly: true - - mountPath: /var/run/calico - name: var-run-calico - readOnly: false - # This container installs the Calico CNI binaries - # and CNI network config file on each node. - - name: install-cni - image: quay.io/calico/cni:v2.0.1 - command: ["/install-cni.sh"] - env: - - name: CNI_CONF_NAME - value: "10-calico.conflist" - # The CNI network config to install on each node. - - name: CNI_NETWORK_CONFIG - valueFrom: - configMapKeyRef: - name: canal-config - key: cni_network_config - - name: KUBERNETES_NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - volumeMounts: - - mountPath: /host/opt/cni/bin - name: cni-bin-dir - - mountPath: /host/etc/cni/net.d - name: cni-net-dir - # This container runs flannel using the kube-subnet-mgr backend - # for allocating subnets. - - name: kube-flannel - image: quay.io/coreos/flannel:v0.9.1-amd64 - command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ] - securityContext: - privileged: true - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: FLANNELD_IFACE - valueFrom: - configMapKeyRef: - name: canal-config - key: canal_iface - - name: FLANNELD_IP_MASQ - valueFrom: - configMapKeyRef: - name: canal-config - key: masquerade - volumeMounts: - - name: run - mountPath: /run - - name: flannel-cfg - mountPath: /etc/kube-flannel/ - volumes: - # Used by calico/node. - - name: lib-modules - hostPath: - path: /lib/modules - - name: var-run-calico - hostPath: - path: /var/run/calico - # Used to install CNI. - - name: cni-bin-dir - hostPath: - path: /opt/cni/bin - - name: cni-net-dir - hostPath: - path: /etc/cni/net.d - # Used by flannel. - - name: run - hostPath: - path: /run - - name: flannel-cfg - configMap: - name: canal-config - - -# Create all the CustomResourceDefinitions needed for -# Calico policy-only mode. 
---- - -apiVersion: apiextensions.k8s.io/v1beta1 -description: Calico Felix Configuration -kind: CustomResourceDefinition -metadata: - name: felixconfigurations.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: FelixConfiguration - plural: felixconfigurations - singular: felixconfiguration - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -description: Calico BGP Configuration -kind: CustomResourceDefinition -metadata: - name: bgpconfigurations.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: BGPConfiguration - plural: bgpconfigurations - singular: bgpconfiguration - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -description: Calico IP Pools -kind: CustomResourceDefinition -metadata: - name: ippools.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: IPPool - plural: ippools - singular: ippool - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -description: Calico Cluster Information -kind: CustomResourceDefinition -metadata: - name: clusterinformations.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: ClusterInformation - plural: clusterinformations - singular: clusterinformation - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -description: Calico Global Network Policies -kind: CustomResourceDefinition -metadata: - name: globalnetworkpolicies.crd.projectcalico.org -spec: - scope: Cluster - group: crd.projectcalico.org - version: v1 - names: - kind: GlobalNetworkPolicy - plural: globalnetworkpolicies - singular: globalnetworkpolicy - ---- - -apiVersion: apiextensions.k8s.io/v1beta1 -description: Calico Network Policies -kind: CustomResourceDefinition -metadata: - name: networkpolicies.crd.projectcalico.org -spec: - scope: Namespaced - group: crd.projectcalico.org - version: v1 - names: - kind: NetworkPolicy - plural: networkpolicies - singular: networkpolicy - ---- - -apiVersion: v1 -kind: ServiceAccount -metadata: - name: canal - namespace: kube-system diff --git a/v1.9/kube-canal/rbac.yaml b/v1.9/kube-canal/rbac.yaml deleted file mode 100644 index 3997358..0000000 --- a/v1.9/kube-canal/rbac.yaml +++ /dev/null @@ -1,129 +0,0 @@ -# Calico Roles -# Pulled from https://docs.projectcalico.org/v2.5/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1beta1 -metadata: - name: calico -rules: - - apiGroups: [""] - resources: - - namespaces - verbs: - - get - - list - - watch - - apiGroups: [""] - resources: - - pods/status - verbs: - - update - - apiGroups: [""] - resources: - - pods - verbs: - - get - - list - - watch - - patch - - apiGroups: [""] - resources: - - services - verbs: - - get - - apiGroups: [""] - resources: - - endpoints - verbs: - - get - - apiGroups: [""] - resources: - - nodes - verbs: - - get - - list - - update - - watch - - apiGroups: ["extensions"] - resources: - - networkpolicies - verbs: - - get - - list - - watch - - apiGroups: ["crd.projectcalico.org"] - resources: - - globalfelixconfigs - - felixconfigurations - - bgppeers - - globalbgpconfigs - - bgpconfigurations - - ippools - - globalnetworkpolicies - - networkpolicies - - clusterinformations - verbs: - - create - - get - - list - - update - - watch - ---- - -# Flannel roles -# Pulled from https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel-rbac.yml -kind: ClusterRole -apiVersion: 
rbac.authorization.k8s.io/v1beta1 -metadata: - name: flannel -rules: - - apiGroups: - - "" - resources: - - pods - verbs: - - get - - apiGroups: - - "" - resources: - - nodes - verbs: - - list - - watch - - apiGroups: - - "" - resources: - - nodes/status - verbs: - - patch ---- - -# Bind the flannel ClusterRole to the canal ServiceAccount. -kind: ClusterRoleBinding -apiVersion: rbac.authorization.k8s.io/v1beta1 -metadata: - name: canal-flannel -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: flannel -subjects: -- kind: ServiceAccount - name: canal - namespace: kube-system - ---- - -# Bind the calico ClusterRole to the canal ServiceAccount. -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: canal-calico -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: calico -subjects: -- kind: ServiceAccount - name: canal - namespace: kube-system diff --git a/v1.9/kube-dashboard/kubernetes-dashboard.yaml b/v1.9/kube-dashboard/kubernetes-dashboard.yaml deleted file mode 100644 index 9094412..0000000 --- a/v1.9/kube-dashboard/kubernetes-dashboard.yaml +++ /dev/null @@ -1,192 +0,0 @@ -# Copyright 2017 The Kubernetes Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# Configuration to deploy release version of the Dashboard UI compatible with -# Kubernetes 1.8. -# -# Example usage: kubectl create -f - -# ------------------- Dashboard Secret ------------------- # - -apiVersion: v1 -kind: Secret -metadata: - labels: - k8s-app: kubernetes-dashboard - name: kubernetes-dashboard-certs - namespace: kube-system -type: Opaque - ---- -# ------------------- Dashboard Service Account ------------------- # - -apiVersion: v1 -kind: ServiceAccount -metadata: - labels: - k8s-app: kubernetes-dashboard - name: kubernetes-dashboard - namespace: kube-system - ---- -# ------------------- Dashboard Role & Role Binding ------------------- # - -kind: Role -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: kubernetes-dashboard-minimal - namespace: kube-system -rules: - # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret. -- apiGroups: [""] - resources: ["secrets"] - verbs: ["create"] - # Allow Dashboard to create 'kubernetes-dashboard-settings' config map. -- apiGroups: [""] - resources: ["configmaps"] - verbs: ["create"] - # Allow Dashboard to get, update and delete Dashboard exclusive secrets. -- apiGroups: [""] - resources: ["secrets"] - resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"] - verbs: ["get", "update", "delete"] - # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map. -- apiGroups: [""] - resources: ["configmaps"] - resourceNames: ["kubernetes-dashboard-settings"] - verbs: ["get", "update"] - # Allow Dashboard to get metrics from heapster. 
-- apiGroups: [""] - resources: ["services"] - resourceNames: ["heapster"] - verbs: ["proxy"] -- apiGroups: [""] - resources: ["services/proxy"] - resourceNames: ["heapster", "http:heapster:", "https:heapster:"] - verbs: ["get"] - ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: - name: kubernetes-dashboard-minimal - namespace: kube-system -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: kubernetes-dashboard-minimal -subjects: -- kind: ServiceAccount - name: kubernetes-dashboard - namespace: kube-system - ---- -# ------------------- Dashboard Deployment ------------------- # - -kind: Deployment -apiVersion: apps/v1beta2 -metadata: - labels: - k8s-app: kubernetes-dashboard - name: kubernetes-dashboard - namespace: kube-system -spec: - replicas: 1 - revisionHistoryLimit: 10 - selector: - matchLabels: - k8s-app: kubernetes-dashboard - template: - metadata: - labels: - k8s-app: kubernetes-dashboard - spec: - nodeSelector: - node-role.kubernetes.io/master: "" - containers: - - name: kubernetes-dashboard - image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.3 - ports: - - containerPort: 8443 - protocol: TCP - args: - - --auto-generate-certificates - # Uncomment the following line to manually specify Kubernetes API server Host - # If not specified, Dashboard will attempt to auto discover the API server and connect - # to it. Uncomment only if the default does not work. - # - --apiserver-host=http://my-address:port - volumeMounts: - - name: kubernetes-dashboard-certs - mountPath: /certs - # Create on-disk volume to store exec logs - - mountPath: /tmp - name: tmp-volume - livenessProbe: - httpGet: - scheme: HTTPS - path: / - port: 8443 - initialDelaySeconds: 30 - timeoutSeconds: 30 - volumes: - - name: kubernetes-dashboard-certs - secret: - secretName: kubernetes-dashboard-certs - - name: tmp-volume - emptyDir: {} - serviceAccountName: kubernetes-dashboard - # Comment the following tolerations if Dashboard must not be deployed on master - tolerations: - - key: node-role.kubernetes.io/master - effect: NoSchedule - ---- -# ------------------- Dashboard Service ------------------- # - -kind: Service -apiVersion: v1 -metadata: - labels: - k8s-app: kubernetes-dashboard - name: kubernetes-dashboard - namespace: kube-system -spec: - type: NodePort - ports: - - port: 443 - targetPort: 8443 - nodePort: 30000 - selector: - k8s-app: kubernetes-dashboard - ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: admin-user - namespace: kube-system - ---- -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: admin-user -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: cluster-admin -subjects: -- kind: ServiceAccount - name: admin-user - namespace: kube-system diff --git a/v1.9/kube-heapster/influxdb/grafana.yaml b/v1.9/kube-heapster/influxdb/grafana.yaml deleted file mode 100644 index 10c1417..0000000 --- a/v1.9/kube-heapster/influxdb/grafana.yaml +++ /dev/null @@ -1,75 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: monitoring-grafana - namespace: kube-system -spec: - replicas: 1 - template: - metadata: - labels: - task: monitoring - k8s-app: grafana - spec: - nodeSelector: - node-role.kubernetes.io/master: "" - containers: - - name: grafana - image: gcr.io/google_containers/heapster-grafana-amd64:v4.4.3 - imagePullPolicy: IfNotPresent - ports: - - containerPort: 3000 - protocol: TCP - volumeMounts: - - mountPath: /etc/ssl/certs - name: ca-certificates - 
readOnly: true - - mountPath: /var - name: grafana-storage - env: - - name: INFLUXDB_HOST - value: monitoring-influxdb - - name: GF_SERVER_HTTP_PORT - value: "3000" - # The following env variables are required to make Grafana accessible via - # the kubernetes api-server proxy. On production clusters, we recommend - # removing these env variables, setup auth for grafana, and expose the grafana - # service using a LoadBalancer or a public IP. - - name: GF_AUTH_BASIC_ENABLED - value: "false" - - name: GF_AUTH_ANONYMOUS_ENABLED - value: "true" - - name: GF_AUTH_ANONYMOUS_ORG_ROLE - value: Admin - - name: GF_SERVER_ROOT_URL - # If you're only using the API Server proxy, set this value instead: - # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy - value: / - volumes: - - name: ca-certificates - hostPath: - path: /etc/ssl/certs - - name: grafana-storage - emptyDir: {} ---- -apiVersion: v1 -kind: Service -metadata: - labels: - # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons) - # If you are NOT using this as an addon, you should comment out this line. - kubernetes.io/cluster-service: 'true' - kubernetes.io/name: monitoring-grafana - name: monitoring-grafana - namespace: kube-system -spec: - # In a production setup, we recommend accessing Grafana through an external Loadbalancer - # or through a public IP. - # type: LoadBalancer - # You could also use NodePort to expose the service at a randomly-generated port - # type: NodePort - ports: - - port: 80 - targetPort: 3000 - selector: - k8s-app: grafana diff --git a/v1.9/kube-heapster/influxdb/heapster.yaml b/v1.9/kube-heapster/influxdb/heapster.yaml deleted file mode 100644 index 21b96c7..0000000 --- a/v1.9/kube-heapster/influxdb/heapster.yaml +++ /dev/null @@ -1,48 +0,0 @@ -apiVersion: v1 -kind: ServiceAccount -metadata: - name: heapster - namespace: kube-system ---- -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: heapster - namespace: kube-system -spec: - replicas: 1 - template: - metadata: - labels: - task: monitoring - k8s-app: heapster - spec: - serviceAccountName: heapster - nodeSelector: - node-role.kubernetes.io/master: "" - containers: - - name: heapster - image: gcr.io/google_containers/heapster-amd64:v1.4.2 - imagePullPolicy: IfNotPresent - command: - - /heapster - - --source=kubernetes:https://kubernetes.default - - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086 ---- -apiVersion: v1 -kind: Service -metadata: - labels: - task: monitoring - # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons) - # If you are NOT using this as an addon, you should comment out this line. 
- kubernetes.io/cluster-service: 'true' - kubernetes.io/name: Heapster - name: heapster - namespace: kube-system -spec: - ports: - - port: 80 - targetPort: 8082 - selector: - k8s-app: heapster diff --git a/v1.9/kube-heapster/influxdb/influxdb.yaml b/v1.9/kube-heapster/influxdb/influxdb.yaml deleted file mode 100644 index 7d83b06..0000000 --- a/v1.9/kube-heapster/influxdb/influxdb.yaml +++ /dev/null @@ -1,43 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: monitoring-influxdb - namespace: kube-system -spec: - replicas: 1 - template: - metadata: - labels: - task: monitoring - k8s-app: influxdb - spec: - nodeSelector: - node-role.kubernetes.io/master: "" - containers: - - name: influxdb - image: gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3 - imagePullPolicy: IfNotPresent - volumeMounts: - - mountPath: /data - name: influxdb-storage - volumes: - - name: influxdb-storage - emptyDir: {} ---- -apiVersion: v1 -kind: Service -metadata: - labels: - task: monitoring - # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons) - # If you are NOT using this as an addon, you should comment out this line. - kubernetes.io/cluster-service: 'true' - kubernetes.io/name: monitoring-influxdb - name: monitoring-influxdb - namespace: kube-system -spec: - ports: - - port: 8086 - targetPort: 8086 - selector: - k8s-app: influxdb diff --git a/v1.9/kube-heapster/rbac/heapster-rbac.yaml b/v1.9/kube-heapster/rbac/heapster-rbac.yaml deleted file mode 100644 index 6e63803..0000000 --- a/v1.9/kube-heapster/rbac/heapster-rbac.yaml +++ /dev/null @@ -1,12 +0,0 @@ -kind: ClusterRoleBinding -apiVersion: rbac.authorization.k8s.io/v1beta1 -metadata: - name: heapster -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: system:heapster -subjects: -- kind: ServiceAccount - name: heapster - namespace: kube-system diff --git a/v1.9/kube-ingress/configmap.yaml b/v1.9/kube-ingress/configmap.yaml deleted file mode 100644 index 08e9101..0000000 --- a/v1.9/kube-ingress/configmap.yaml +++ /dev/null @@ -1,7 +0,0 @@ -kind: ConfigMap -apiVersion: v1 -metadata: - name: nginx-configuration - namespace: ingress-nginx - labels: - app: ingress-nginx diff --git a/v1.9/kube-ingress/default-backend.yaml b/v1.9/kube-ingress/default-backend.yaml deleted file mode 100644 index 72a73ba..0000000 --- a/v1.9/kube-ingress/default-backend.yaml +++ /dev/null @@ -1,55 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: default-http-backend - labels: - app: default-http-backend - namespace: ingress-nginx -spec: - replicas: 1 - template: - metadata: - labels: - app: default-http-backend - spec: - terminationGracePeriodSeconds: 60 - nodeSelector: - node-role.kubernetes.io/master: "" - containers: - - name: default-http-backend - # Any image is permissable as long as: - # 1. It serves a 404 page at / - # 2. 
It serves 200 on a /healthz endpoint - image: devops-reg.io/k8s/defaultbackend:1.4 - imagePullPolicy: IfNotPresent - livenessProbe: - httpGet: - path: /healthz - port: 8080 - scheme: HTTP - initialDelaySeconds: 30 - timeoutSeconds: 5 - ports: - - containerPort: 8080 - resources: - limits: - cpu: 10m - memory: 20Mi - requests: - cpu: 10m - memory: 20Mi ---- - -apiVersion: v1 -kind: Service -metadata: - name: default-http-backend - namespace: ingress-nginx - labels: - app: default-http-backend -spec: - ports: - - port: 80 - targetPort: 8080 - selector: - app: default-http-backend diff --git a/v1.9/kube-ingress/ingress-demo.yaml b/v1.9/kube-ingress/ingress-demo.yaml deleted file mode 100644 index c904761..0000000 --- a/v1.9/kube-ingress/ingress-demo.yaml +++ /dev/null @@ -1,49 +0,0 @@ ---- -apiVersion: v1 -kind: Service -metadata: - name: ingress-nginx - namespace: ingress-nginx -spec: - type: NodePort - ports: - - name: http - nodePort: 32000 - port: 80 - protocol: TCP - targetPort: 80 - - name: https - nodePort: 32001 - port: 443 - protocol: TCP - targetPort: 443 - selector: - app: ingress-nginx - ---- -apiVersion: extensions/v1beta1 -kind: Ingress -metadata: - name: ingress-demo - namespace: default -spec: - rules: - - host: devops-master-lb - http: - paths: - - backend: - serviceName: jenkins - servicePort: 80 - path: /jenkins - - backend: - serviceName: gitlab - servicePort: 80 - path: /gitlab - - backend: - serviceName: nexus - servicePort: 8081 - path: /nexus - - backend: - serviceName: inops-tomcat - servicePort: 8080 - path: /inops diff --git a/v1.9/kube-ingress/namespace.yaml b/v1.9/kube-ingress/namespace.yaml deleted file mode 100644 index 6878f0b..0000000 --- a/v1.9/kube-ingress/namespace.yaml +++ /dev/null @@ -1,4 +0,0 @@ -apiVersion: v1 -kind: Namespace -metadata: - name: ingress-nginx diff --git a/v1.9/kube-ingress/rbac.yaml b/v1.9/kube-ingress/rbac.yaml deleted file mode 100644 index 3018532..0000000 --- a/v1.9/kube-ingress/rbac.yaml +++ /dev/null @@ -1,133 +0,0 @@ -apiVersion: v1 -kind: ServiceAccount -metadata: - name: nginx-ingress-serviceaccount - namespace: ingress-nginx - ---- - -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRole -metadata: - name: nginx-ingress-clusterrole -rules: - - apiGroups: - - "" - resources: - - configmaps - - endpoints - - nodes - - pods - - secrets - verbs: - - list - - watch - - apiGroups: - - "" - resources: - - nodes - verbs: - - get - - apiGroups: - - "" - resources: - - services - verbs: - - get - - list - - watch - - apiGroups: - - "extensions" - resources: - - ingresses - verbs: - - get - - list - - watch - - apiGroups: - - "" - resources: - - events - verbs: - - create - - patch - - apiGroups: - - "extensions" - resources: - - ingresses/status - verbs: - - update - ---- - -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: Role -metadata: - name: nginx-ingress-role - namespace: ingress-nginx -rules: - - apiGroups: - - "" - resources: - - configmaps - - pods - - secrets - - namespaces - verbs: - - get - - apiGroups: - - "" - resources: - - configmaps - resourceNames: - # Defaults to "-" - # Here: "-" - # This has to be adapted if you change either parameter - # when launching the nginx-ingress-controller. 
- - "ingress-controller-leader-nginx" - verbs: - - get - - update - - apiGroups: - - "" - resources: - - configmaps - verbs: - - create - - apiGroups: - - "" - resources: - - endpoints - verbs: - - get - ---- - -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: RoleBinding -metadata: - name: nginx-ingress-role-nisa-binding - namespace: ingress-nginx -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: nginx-ingress-role -subjects: - - kind: ServiceAccount - name: nginx-ingress-serviceaccount - namespace: ingress-nginx - ---- - -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: nginx-ingress-clusterrole-nisa-binding -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: nginx-ingress-clusterrole -subjects: - - kind: ServiceAccount - name: nginx-ingress-serviceaccount - namespace: ingress-nginx diff --git a/v1.9/kube-ingress/tcp-services-configmap.yaml b/v1.9/kube-ingress/tcp-services-configmap.yaml deleted file mode 100644 index a963085..0000000 --- a/v1.9/kube-ingress/tcp-services-configmap.yaml +++ /dev/null @@ -1,5 +0,0 @@ -kind: ConfigMap -apiVersion: v1 -metadata: - name: tcp-services - namespace: ingress-nginx diff --git a/v1.9/kube-ingress/udp-services-configmap.yaml b/v1.9/kube-ingress/udp-services-configmap.yaml deleted file mode 100644 index 1870931..0000000 --- a/v1.9/kube-ingress/udp-services-configmap.yaml +++ /dev/null @@ -1,5 +0,0 @@ -kind: ConfigMap -apiVersion: v1 -metadata: - name: udp-services - namespace: ingress-nginx diff --git a/v1.9/kube-ingress/with-rbac.yaml b/v1.9/kube-ingress/with-rbac.yaml deleted file mode 100644 index 5dda4d6..0000000 --- a/v1.9/kube-ingress/with-rbac.yaml +++ /dev/null @@ -1,75 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: nginx-ingress-controller - namespace: ingress-nginx -spec: - replicas: 1 - selector: - matchLabels: - app: ingress-nginx - template: - metadata: - labels: - app: ingress-nginx - annotations: - prometheus.io/port: '10254' - prometheus.io/scrape: 'true' - spec: - serviceAccountName: nginx-ingress-serviceaccount - initContainers: - - command: - - sh - - -c - - sysctl -w net.core.somaxconn=32768; sysctl -w net.ipv4.ip_local_port_range="1024 65535" - image: devops-reg.io/k8s/alpine:3.6 - imagePullPolicy: IfNotPresent - name: sysctl - securityContext: - privileged: true - nodeSelector: - node-role.kubernetes.io/master: "" - containers: - - name: nginx-ingress-controller - image: devops-reg.io/k8s/nginx-ingress-controller:0.10.2 - imagePullPolicy: IfNotPresent - args: - - /nginx-ingress-controller - - --default-backend-service=$(POD_NAMESPACE)/default-http-backend - - --configmap=$(POD_NAMESPACE)/nginx-configuration - - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services - - --udp-services-configmap=$(POD_NAMESPACE)/udp-services - - --annotations-prefix=nginx.ingress.kubernetes.io - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - ports: - - name: http - containerPort: 80 - - name: https - containerPort: 443 - livenessProbe: - failureThreshold: 3 - httpGet: - path: /healthz - port: 10254 - scheme: HTTP - initialDelaySeconds: 10 - periodSeconds: 10 - successThreshold: 1 - timeoutSeconds: 1 - readinessProbe: - failureThreshold: 3 - httpGet: - path: /healthz - port: 10254 - scheme: HTTP - periodSeconds: 10 - successThreshold: 1 - timeoutSeconds: 1 diff --git a/v1.9/kubeadm-init.yaml.tpl 
b/v1.9/kubeadm-init.yaml.tpl deleted file mode 100644 index 3f95f77..0000000 --- a/v1.9/kubeadm-init.yaml.tpl +++ /dev/null @@ -1,22 +0,0 @@ -apiVersion: kubeadm.k8s.io/v1alpha1 -kind: MasterConfiguration -kubernetesVersion: v1.9.1 -networking: - podSubnet: K8SHA_CIDR - serviceSubnet: K8SHA_SVC_CIDR -apiServerCertSANs: -- K8SHA_HOSTNAME1 -- K8SHA_HOSTNAME2 -- K8SHA_HOSTNAME3 -- K8SHA_IP1 -- K8SHA_IP2 -- K8SHA_IP3 -- K8SHA_IPVIRTUAL -- 127.0.0.1 -etcd: - endpoints: - - http://K8SHA_IP1:2379 - - http://K8SHA_IP2:2379 - - http://K8SHA_IP3:2379 -token: K8SHA_TOKEN -tokenTTL: "0" diff --git a/v1.9/nginx-lb/docker-compose.yaml b/v1.9/nginx-lb/docker-compose.yaml deleted file mode 100644 index 72048d7..0000000 --- a/v1.9/nginx-lb/docker-compose.yaml +++ /dev/null @@ -1,11 +0,0 @@ -version: '2' -services: - etcd: - image: nginx:latest - container_name: nginx-lb - hostname: nginx-lb - volumes: - - ./nginx-lb.conf:/etc/nginx/nginx.conf - ports: - - 16443:16443 - restart: always diff --git a/v1.9/nginx-lb/nginx-lb.conf.tpl b/v1.9/nginx-lb/nginx-lb.conf.tpl deleted file mode 100644 index 5367e91..0000000 --- a/v1.9/nginx-lb/nginx-lb.conf.tpl +++ /dev/null @@ -1,46 +0,0 @@ -user nginx; -worker_processes 1; - -error_log /var/log/nginx/error.log warn; -pid /var/run/nginx.pid; - - -events { - worker_connections 1024; -} - - -http { - include /etc/nginx/mime.types; - default_type application/octet-stream; - - log_format main '$remote_addr - $remote_user [$time_local] "$request" ' - '$status $body_bytes_sent "$http_referer" ' - '"$http_user_agent" "$http_x_forwarded_for"'; - - access_log /var/log/nginx/access.log main; - - sendfile on; - #tcp_nopush on; - - keepalive_timeout 65; - - #gzip on; - - include /etc/nginx/conf.d/*.conf; -} - -stream { - upstream apiserver { - server K8SHA_IP1:6443 weight=5 max_fails=3 fail_timeout=30s; - server K8SHA_IP2:6443 weight=5 max_fails=3 fail_timeout=30s; - server K8SHA_IP3:6443 weight=5 max_fails=3 fail_timeout=30s; - } - - server { - listen 16443; - proxy_connect_timeout 1s; - proxy_timeout 3s; - proxy_pass apiserver; - } -}
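
A closing note, not part of the original scripts: once create-config.sh has rendered nginx-lb.conf and the nginx-lb container is running on a master, a quick check along the following lines can confirm that the 16443 stream proxy really reaches all three apiservers. This is a minimal sketch that assumes the default addresses used throughout this guide (masters 192.168.20.27-29, VIP 192.168.20.10); an anonymous request answering with HTTP 403 is the expected healthy result, matching the earlier curl test against https://192.168.20.10:16443.

```
# nginx-lb should be listening on 16443 on the master
ss -tlnp | grep 16443

# each apiserver answers directly on 6443, the VIP answers through nginx on 16443
for ep in 192.168.20.27:6443 192.168.20.28:6443 192.168.20.29:6443 192.168.20.10:16443; do
  echo -n "$ep -> "
  curl -k -s -o /dev/null -w "%{http_code}\n" "https://$ep/"
done
```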