First we need to disable both SELinux and swap. Issue the following commands:
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
Next, disable swap with the following command:
swapoff -a
We must also ensure that swap isn't re-enabled after a reboot on each server. Open /etc/fstab and comment out the swap entry like this:
# /dev/mapper/centos-swap swap swap defaults 0 0
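If you'd rather script that edit than open an editor, a sed one-liner can comment out the swap entry. Below is a sketch that, for safety, runs against a sample copy (/tmp/fstab.sample is just a demonstration name); on a real server you'd point the same sed expression at /etc/fstab:

```shell
# Create a sample file standing in for /etc/fstab (demonstration only)
printf '/dev/mapper/centos-swap swap swap defaults 0 0\n' > /tmp/fstab.sample
# Comment out any uncommented line that has "swap" as a field
sed -ri '/\sswap\s/ s/^[^#]/#&/' /tmp/fstab.sample
cat /tmp/fstab.sample
```

The ^[^#] guard leaves already-commented lines untouched, so the command is safe to run more than once.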
Enable the br_netfilter kernel module. This is done with the following commands:
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
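Note that the echo above only takes effect until the next reboot. To make the setting persistent, create a file named /etc/sysctl.d/k8s.conf (the standard sysctl drop-in directory on CentOS 7) containing the line:

net.bridge.bridge-nf-call-iptables = 1

Then load it with the command sysctl --system.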
Install the Docker-ce dependencies with the following command:
yum install -y yum-utils device-mapper-persistent-data lvm2
Next, add the Docker-ce repository with the command:
yum-config-manager --add-repo https://gist.githubusercontent.com/stolsma/12457f4db016a86fea631fecee419989/raw/ef412fc4e282a3d83fa216b0be53757ecf1edf37/docker-ce.repo
Install Docker-ce with the command:
yum install -y docker-ce
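Note that installing the package doesn't start the Docker daemon, and kubeadm will need it running. Start and enable it (so it comes back after a reboot) with the commands:

systemctl start docker
systemctl enable docker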
Next, we need to create a Kubernetes repository entry for yum. To do this, issue the following command:
yum-config-manager --add-repo https://gist.githubusercontent.com/stolsma/12457f4db016a86fea631fecee419989/raw/ef412fc4e282a3d83fa216b0be53757ecf1edf37/kubernetes.repo
Install Kubernetes with the command:
yum install -y kubelet kubeadm kubectl
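Before rebooting, enable the kubelet service so that it starts automatically at boot:

systemctl enable kubelet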
Once the installation completes, reboot the machine.
Now we need to ensure that both Docker-ce and Kubernetes belong to the same control group (cgroup). By default, Docker should already be using the cgroupfs driver (you can check this with the command docker info | grep -i cgroup). To configure the kubelet to use the same driver, issue the command:
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Restart the systemd daemon and the kubelet service with the commands:
systemctl daemon-reload
systemctl restart kubelet
We're now ready to initialize the Kubernetes cluster. This is done on kubemaster (and only on that machine). On kubemaster, issue the command (again, adjusting the IP addresses to fit your needs):
kubeadm init --apiserver-advertise-address=<MASTER_IP> --pod-network-cidr=<POD_NETWORK>/<POD_NETWORK_SUBNET_BITS>
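As a concrete (purely illustrative) example, with a master at 192.168.1.100 and flannel's default pod network of 10.244.0.0/16 (the flannel manifest deployed later in this walkthrough expects that CIDR unless you edit it), the command would be:

kubeadm init --apiserver-advertise-address=192.168.1.100 --pod-network-cidr=10.244.0.0/16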
When this completes (it'll take anywhere from 30 seconds to 5 minutes), the output should include the joining command for your nodes.
Once that completes, head over to kube2 and issue the command (adjusting the IP address to fit your needs):
kubeadm join <MASTER_IP>:6443 --token TOKEN --discovery-token-ca-cert-hash DISCOVERY_TOKEN
Where TOKEN and DISCOVERY_TOKEN are the token and CA certificate hash displayed after the initialization command completes.
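If you lose track of those values (the join token expires after 24 hours by default), you can print a fresh, complete join command on the master with:

kubeadm token create --print-join-command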
Before Kubernetes can be used, we must take care of a bit of configuration. Issue the following three commands (to create a new .kube configuration directory, copy the necessary configuration file, and give the file the proper ownership):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Now we must deploy the flannel network to the cluster with the command:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Once the deploy command completes, you should be able to see both nodes listed on the master by issuing the command kubectl get nodes.
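You can also confirm that the flannel and other system pods came up by issuing the command kubectl get pods --all-namespaces on the master; every pod should eventually reach a Running status (the nodes will report NotReady until the network pods are up).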
Congratulations, you now have a Kubernetes cluster ready for pods.
Should something go wrong along the way, you can tear down the Kubernetes configuration on a machine and start over with the command:
sudo kubeadm reset