https://medium.com/containerum/4-ways-to-bootstrap-a-kubernetes-cluster-de0d5150a1e4
Steps to set up a Kubernetes cluster on CentOS (a Kubernetes cluster is the combination of a master node and worker nodes)
========================================================================================================================
First, run the initial steps below on all 3 nodes
==================================================
kubectl commands should be run only on the master node, not on the worker nodes
================================================================================
1. sudo hostnamectl set-hostname k8master, then run: exec bash   (run only on the master node)
sudo hostnamectl set-hostname k8node1, then run: exec bash   (run only on worker node1)
sudo hostnamectl set-hostname k8node2, then run: exec bash   (run only on worker node2)
2. Edit /etc/hosts (vim /etc/hosts) on each node and add entries for all 3 nodes:
example: 0.0.0.0 k8master.gonext.com k8master
example: 1.1.1.1 k8node1.gonext.com k8node1
example: 2.2.2.2 k8node2.gonext.com k8node2
This step is required so that all 3 nodes can resolve and communicate with each other by name.
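One way to append these entries on each node in a single command (the IPs below are the placeholder values from the example above; replace them with the real node IPs):
cat <<EOF | sudo tee -a /etc/hosts
0.0.0.0 k8master.gonext.com k8master
1.1.1.1 k8node1.gonext.com k8node1
2.2.2.2 k8node2.gonext.com k8node2
EOF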
3. Pinging between the nodes should work, and ICMPv4 should be enabled.
important:
create a file named firewall.sh,
copy the block below (from #!/bin/bash down to set +x) into it,
and run it with the following command:
bash firewall.sh
#!/bin/bash
set -x
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10251/tcp
firewall-cmd --permanent --add-port=10252/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --permanent --add-port=2181/tcp
firewall-cmd --permanent --add-port=2888/tcp
firewall-cmd --permanent --add-port=3888/tcp
firewall-cmd --permanent --add-port=9092/tcp
firewall-cmd --permanent --add-port=5473/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --permanent --add-port=179/tcp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --permanent --add-port=8285/udp
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --zone=public --add-port=53/tcp --permanent
firewall-cmd --zone=public --add-port=53/udp --permanent
firewall-cmd --add-masquerade --permanent
firewall-cmd --reload
set +x
iptables -w -P FORWARD ACCEPT
modprobe overlay
modprobe br_netfilter
tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
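To confirm the kernel modules and sysctl settings took effect (a verification step, not part of the original notes):
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward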
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
4. The firewalld package should be installed and running (on some cloud images, e.g. AWS/Ubuntu, it may be missing or disabled by default).
install the package firewalld:
yum install firewalld -y
systemctl status firewalld
systemctl start firewalld
5. Swap must be disabled, because the kubelet does not support running with swap enabled.
free -m
swapoff -a
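swapoff -a only disables swap until the next reboot; to keep it disabled permanently you can also comment out the swap entry in /etc/fstab (a common follow-up step, not in the original notes):
sudo swapoff -a
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab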
6. SELinux should be disabled (or at least set to permissive):
sestatus
setenforce 0
sestatus
# To persist the change across reboots:
# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
Make sure bridged traffic is visible to iptables:
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
# firewall-cmd --permanent --add-port=6783/tcp
# firewall-cmd --permanent --add-port=10250/tcp
# firewall-cmd --permanent --add-port=10255/tcp
# firewall-cmd --permanent --add-port=30000-32767/tcp
# firewall-cmd --reload
11. Install docker and enable the service on all 3 nodes
sudo yum install -y docker   (on CentOS the package is "docker"; "docker.io" is the Debian/Ubuntu name)
sudo systemctl start docker
sudo systemctl enable docker
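kubelet and Docker should use the same cgroup driver. One common way to pin Docker to the systemd driver is a /etc/docker/daemon.json like the one below (a sketch, not part of the original notes; skip it if you keep the cgroupfs driver and adjust the kubelet instead, as in step 15):
sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker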
12. Install kubeadm, kubelet and kubectl on all 3 nodes
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
(--disableexcludes=kubernetes is needed because the repo file above lists these packages on its exclude line)
## Only if the kubelet cgroup driver needs to be switched to cgroupfs:
##sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
13. Check the kubeadm version: kubeadm version
14. Start and enable the kubelet service (it will restart in a loop until kubeadm init/join has run; that is expected)
sudo systemctl start kubelet
sudo systemctl enable kubelet
Only on the master node - run the following commands and install the network add-on plugin on the master node only.
====================================================================================================================
#15. Add the cgroup-driver environment setting for docker and kubelet in the kubelet drop-in file (master node only)
#file name: 10-kubeadm.conf
#path: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
#set the kubelet --cgroup-driver flag there to systemd or cgroupfs, matching the driver Docker reports (docker info | grep -i cgroup)
systemctl restart docker && systemctl enable docker
systemctl restart kubelet && systemctl enable kubelet
16. sudo kubeadm init --apiserver-advertise-address **PROVIDE IP ADDRESS of master node** --pod-network-cidr=192.168.0.0/16
or
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
(--pod-network-cidr must match the network add-on: 10.244.0.0/16 is the flannel default, 192.168.0.0/16 the Calico default)
17. Install the flannel network add-on (run this after the kubeconfig from step 18 is in place, otherwise kubectl cannot reach the API server):
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
18. mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
19. Run kubectl get nodes to verify whether all 3 nodes have joined and are Ready.
20. kubectl get pods --all-namespaces
21. Save the generated join token; with this token the worker nodes are joined to the master (run the join command on each worker node).
example: kubeadm join 172.31.45.220:6443 --token 4dpaz7.1d4qr7d06m46ta2n --discovery-token-ca-cert-hash sha256:94d0de44045b5a94507a3586ff68532fc7da62f45841c72d22622dbc9a5d23be
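If the join command was not saved (tokens expire after 24 hours by default), a fresh one can be printed from the master node:
kubeadm token create --print-join-command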
Another kubeadm init example: kubeadm init --apiserver-advertise-address=10.46.176.28 --pod-network-cidr=192.168.1.0/16
https://phoenixnap.com/kb/how-to-install-kubernetes-on-centos
https://www.tecmint.com/install-kubernetes-cluster-on-centos-7/