First of all, become root for the duration of this setup:
sudo su
yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer bridge-utils
systemctl start libvirtd
systemctl enable libvirtd
lsmod | grep kvm
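If lsmod shows nothing, it's worth confirming the CPU actually exposes hardware virtualization extensions. A minimal sketch of the check, run here against a hypothetical flags line rather than the live /proc/cpuinfo:

```shell
# Hypothetical sample of a /proc/cpuinfo "flags" line; on a real host you
# would run the same check against the live file:
#   grep -Ec '(vmx|svm)' /proc/cpuinfo
sample='flags : fpu vme de pse tsc msr vmx ept'
# vmx = Intel VT-x, svm = AMD-V; a count of 0 means the CPU offers no
# hardware virtualization and KVM will not work
echo "$sample" | grep -Ec '(vmx|svm)'
```

A result of 0 on the real file means you need to enable virtualization in the BIOS/UEFI (or the hardware doesn't support it).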
If needed, install an X Windows desktop so you can use the graphical virt-manager:
sudo yum groupinstall "GNOME Desktop" "Graphical Administration Tools"
sudo ln -sf /lib/systemd/system/runlevel5.target /etc/systemd/system/default.target
reboot
Before we start creating VMs, let's first create the bridge interface. A bridge interface is required if you want to reach your virtual machines from outside the hypervisor's network.
cd /etc/sysconfig/network-scripts/
cp ifcfg-eno1 ifcfg-br0
Edit the physical interface file and set the following:
[root@ network-scripts]# vi ifcfg-eno1
TYPE=Ethernet
BOOTPROTO=static
DEVICE=eno1
ONBOOT=yes
BRIDGE=br0
Edit the bridge file (ifcfg-br0) and set the following:
[root@ network-scripts]# vi ifcfg-br0
TYPE=Bridge
BOOTPROTO=static
DEVICE=br0
ONBOOT=yes
Add the IP address and DNS server details for your network to ifcfg-br0; the address now lives on the bridge rather than on the physical interface.
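For example, a complete static ifcfg-br0 might look like the following (all addresses are placeholders for illustration; substitute your own):

```shell
# Example ifcfg-br0; IPADDR/GATEWAY/DNS1 values are hypothetical
TYPE=Bridge
BOOTPROTO=static
DEVICE=br0
ONBOOT=yes
IPADDR=192.168.1.10
PREFIX=24
GATEWAY=192.168.1.1
DNS1=192.168.1.1
```

ifcfg files use shell-style KEY=value assignments, one per line.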
Restart the network service to enable the bridge interface:
systemctl restart network
Check the bridge interface with the command below:
ip addr show br0
With the hypervisor ready, prepare the machines for Kubernetes. First we need to disable both SELinux and swap. Issue the following commands:
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
Next, disable swap with the following command:
swapoff -a
We must also ensure that swap isn't re-enabled during a reboot on each server. Open up /etc/fstab and comment out the swap entry like this:
vi /etc/fstab
# /dev/mapper/centos-swap swap swap defaults 0 0
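If you'd rather script the edit than do it by hand, a sed one-liner like this comments out any line with a swap filesystem field. The sketch below runs against a throwaway copy so it is safe to try anywhere:

```shell
# Operate on a temporary copy for demonstration; on a real server you
# would target /etc/fstab itself.
fstab=$(mktemp)
echo '/dev/mapper/centos-swap swap swap defaults 0 0' > "$fstab"
# Prefix every line mentioning the swap filesystem type with '#'
sed -i '/\sswap\s/ s/^/#/' "$fstab"
cat "$fstab"
```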
Enable the br_netfilter kernel module. This is done with the following commands:
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
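Note that the echo above only lasts until the next reboot. A common way to make the setting persistent is a sysctl drop-in (the filename here is just a convention), e.g. /etc/sysctl.d/99-kubernetes.conf containing:

```
net.bridge.bridge-nf-call-iptables = 1
```

followed by sysctl --system to load it immediately.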
Install the Docker-ce dependencies with the following command:
yum install -y yum-utils device-mapper-persistent-data lvm2
Next, add the Docker-ce repository with the command:
yum-config-manager --add-repo https://gist.githubusercontent.com/stolsma/12457f4db016a86fea631fecee419989/raw/ef412fc4e282a3d83fa216b0be53757ecf1edf37/docker-ce.repo
Install Docker-ce with the command:
yum install -y docker-ce
Start Docker now and enable it to start automatically at boot:
systemctl start docker
systemctl enable docker
First we need to create a repository entry for yum. To do this, issue the following command:
yum-config-manager --add-repo https://gist.githubusercontent.com/stolsma/12457f4db016a86fea631fecee419989/raw/ef412fc4e282a3d83fa216b0be53757ecf1edf37/kubernetes.repo
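The gist above supplies a ready-made kubernetes.repo. If you'd rather write it yourself, a repo definition from that era typically looked like the following (these are the upstream Google-hosted package URLs of the time; verify them before use, since Kubernetes package hosting has since moved):

```
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
```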
Install Kubernetes with the command:
yum install -y kubelet kubeadm kubectl
Once the installation completes, reboot the machine.
Now we need to ensure that both Docker-ce and Kubernetes use the same control group (cgroup) driver. By default, Docker should already use cgroupfs (you can check this with the command docker info | grep -i cgroup). To switch the kubelet to the same driver, issue the command:
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
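To see what that substitution does, here is the same sed expression applied to a hypothetical line in the style of 10-kubeadm.conf (the real file's contents vary between kubelet/kubeadm versions):

```shell
# Hypothetical sample line; the actual 10-kubeadm.conf differs by version
line='Environment="KUBELET_KUBECONFIG_ARGS=--cgroup-driver=systemd"'
# The same substitution as above, applied to stdin instead of the file
echo "$line" | sed 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g'
```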
Restart the systemd daemon and the kubelet service with the commands:
systemctl daemon-reload
systemctl restart kubelet
We're now ready to initialize the Kubernetes cluster. This is done on kubemaster (and only on that machine). On kubemaster, issue the command (again, adjusting the IP addresses to fit your needs):
kubeadm init --apiserver-advertise-address=<MASTER_IP> --pod-network-cidr=<POD_NETWORK>/<POD_NETWORK_SUBNET_BITS>
With flannel as CNI plugin:
kubeadm init --apiserver-advertise-address=<MASTER_IP> --pod-network-cidr=10.244.0.0/16
When this completes (it'll take anywhere from 30 seconds to 5 minutes), the output should include the joining command for your nodes.
Once that completes, head over to kube2 and issue the command (adjusting the IP address to fit your needs):
kubeadm join <MASTER_IP>:6443 --token TOKEN --discovery-token-ca-cert-hash DISCOVERY_TOKEN_HASH
Where TOKEN and DISCOVERY_TOKEN_HASH (the latter of the form sha256:<hash>) are the values displayed after the initialization command completes.
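If you lose the printed values, the hash half can be recomputed: the --discovery-token-ca-cert-hash value is "sha256:" plus the SHA-256 digest of the cluster CA's DER-encoded public key (the CA sits at /etc/kubernetes/pki/ca.crt on the master). A sketch of that computation, demonstrated on a throwaway self-signed cert so it can run anywhere:

```shell
# Generate a throwaway CA cert purely for demonstration; on the master you
# would point -in at /etc/kubernetes/pki/ca.crt instead.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=demo-ca' \
    -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null
# Extract the public key, re-encode it as DER, and hash it
hash=$(openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash"
```

On a live master, kubeadm token create --print-join-command will also regenerate a complete join command for you.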
Before Kubernetes can be used, we must take care of a bit of configuration. Come out of root and issue the following three commands (to create a new .kube configuration directory, copy the necessary configuration file, and give the file the proper ownership):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Now we must deploy the flannel network to the cluster with the command:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Once the deploy command completes, you should be able to see both nodes from the master by issuing the command:
kubectl get nodes
Congratulations, you now have a Kubernetes cluster ready for pods.
If something goes wrong and you need to start over, you can tear down a node's Kubernetes state with:
sudo kubeadm reset