
@stolsma
Last active August 7, 2022 16:19

Revisions

  1. stolsma revised this gist Oct 24, 2018. No changes.
  2. stolsma revised this gist Oct 24, 2018. 1 changed file with 1 addition and 0 deletions.
    1 change: 1 addition & 0 deletions 2-Install kubectl on ubuntu for windows.md
    @@ -43,6 +43,7 @@ chmod 700 get_helm.sh
    ```

    After installation run `helm init` and add an RBAC service account for Tiller to run as (see https://medium.com/@amimahloof/how-to-setup-helm-and-tiller-with-rbac-and-namespaces-34bf27f7d3c3 for background on RBAC):

    ```
    helm init
  3. stolsma revised this gist Oct 24, 2018. 1 changed file with 2 additions and 0 deletions.
    2 changes: 2 additions & 0 deletions 2-Install kubectl on ubuntu for windows.md
    @@ -32,6 +32,8 @@ The admin.conf file gives the user superuser privileges over the cluster. This f

    ## Install Helm and Tiller

    Helm is the package manager for Kubernetes. It lets you define, install, and upgrade Kubernetes-based applications. For more information about Helm, please visit the official website: https://helm.sh.

    To install Helm do:

    ```
  4. stolsma revised this gist Oct 24, 2018. 1 changed file with 10 additions and 1 deletion.
    11 changes: 10 additions & 1 deletion 2-Install kubectl on ubuntu for windows.md
    @@ -40,4 +40,13 @@ chmod 700 get_helm.sh
    ./get_helm.sh
    ```

    After installation run `helm init`
    After installation run `helm init` and add RBAC service accounts for Tiller to run as:

    ```
    helm init
    kubectl create serviceaccount --namespace kube-system tiller
    kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
    kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
    helm init --service-account tiller --upgrade
    ```
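The four `kubectl`/`helm` steps above can also be captured as a single manifest applied before `helm init --service-account tiller`. A sketch (the file name `tiller-rbac.yaml` is illustrative):

```shell
# Sketch: the same ServiceAccount and ClusterRoleBinding as one manifest.
# On the cluster you would apply it with: kubectl apply -f tiller-rbac.yaml
cat > tiller-rbac.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-cluster-rule
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
EOF
```

Keeping the RBAC objects in one file makes the grant easy to review and to remove again later.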

  5. stolsma revised this gist Oct 22, 2018. 1 changed file with 5 additions and 0 deletions.
    5 changes: 5 additions & 0 deletions 2-Install kubectl on ubuntu for windows.md
    @@ -17,7 +17,12 @@ sudo apt-get install -y kubectl
    To get kubectl on some other computer (e.g. a laptop) to talk to your cluster, copy the administrator kubeconfig file from your master to your workstation like this:

    ```
    # To add this config to your set of configs:
    scp root@<master ip>:/etc/kubernetes/admin.conf .
    export KUBECONFIG=$KUBECONFIG:admin.conf
    kubectl get nodes
    # to use only a specific conf file do:
    kubectl --kubeconfig ./admin.conf get nodes
    ```
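For context on the `export KUBECONFIG=...` line above: `KUBECONFIG` is a colon-separated list of kubeconfig paths that kubectl merges left to right, much like `PATH`. A minimal sketch (paths are illustrative):

```shell
# KUBECONFIG is a colon-separated list of kubeconfig files; kubectl
# merges them in order. Paths below are stand-ins for real config files.
export KUBECONFIG="$HOME/.kube/config:$PWD/admin.conf"
echo "$KUBECONFIG"
# kubectl config view --flatten   # would print the merged result as one file
```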

  6. stolsma revised this gist Oct 22, 2018. 1 changed file with 17 additions and 0 deletions.
    17 changes: 17 additions & 0 deletions 2-Install kubectl on ubuntu for windows.md
    @@ -1,5 +1,7 @@
    # Install Kubectl and Helm on Ubuntu for Windows subsystem

    ## Install Kubectl

    To install kubectl on Ubuntu for Windows Subsystem do:

    ```
    @@ -10,6 +12,21 @@ sudo apt-get update
    sudo apt-get install -y kubectl
    ```

    ### Controlling your cluster from machines other than the master

    To get kubectl on some other computer (e.g. a laptop) to talk to your cluster, copy the administrator kubeconfig file from your master to your workstation like this:

    ```
    scp root@<master ip>:/etc/kubernetes/admin.conf .
    kubectl --kubeconfig ./admin.conf get nodes
    ```

    Note: The example above assumes SSH access is enabled for root. If that is not the case, you can copy the admin.conf file to be accessible by some other user and scp using that other user instead.

    The admin.conf file gives the user superuser privileges over the cluster. This file should be used sparingly. For normal users, it’s recommended to generate a unique credential to which you whitelist privileges. You can do this with the `kubeadm alpha phase kubeconfig user --client-name <CN>` command. That command will print out a KubeConfig file to STDOUT, which you should save to a file and distribute to your user. After that, whitelist privileges by using `kubectl create (cluster)rolebinding`.

    ## Install Helm and Tiller

    To install Helm do:

    ```
  7. stolsma revised this gist Oct 22, 2018. 1 changed file with 13 additions and 3 deletions.
    16 changes: 13 additions & 3 deletions 2-Install kubectl on ubuntu for windows.md
    @@ -1,11 +1,21 @@
    # Install Kubectl on Ubuntu for Windows subsystem
    # Install Kubectl and Helm on Ubuntu for Windows subsystem

    Do:
    To install kubectl on Ubuntu for Windows Subsystem do:

    ```
    sudo apt-get update && sudo apt-get install -y apt-transport-https
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
    sudo apt-get update
    sudo apt-get install -y kubectl
    ```
    ```

    To install Helm do:

    ```
    curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
    chmod 700 get_helm.sh
    ./get_helm.sh
    ```

    After installation run `helm init`
  8. stolsma revised this gist Oct 22, 2018. 1 changed file with 11 additions and 0 deletions.
    11 changes: 11 additions & 0 deletions 2-Install kubectl on ubuntu for windows.md
    @@ -0,0 +1,11 @@
    # Install Kubectl on Ubuntu for Windows subsystem

    Do:

    ```
    sudo apt-get update && sudo apt-get install -y apt-transport-https
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
    sudo apt-get update
    sudo apt-get install -y kubectl
    ```
  9. stolsma revised this gist Oct 19, 2018. 1 changed file with 10 additions and 0 deletions.
    10 changes: 10 additions & 0 deletions 1-Install.md
    @@ -106,6 +106,16 @@ firewall-cmd --permanent --add-port=10255/tcp
    firewall-cmd --reload
    ```

    Or load the correct firewall rules for Kubernetes workers:

    ```
    firewall-cmd --permanent --add-port=10250/tcp
    firewall-cmd --permanent --add-port=10255/tcp
    firewall-cmd --permanent --add-port=30000-32767/tcp
    firewall-cmd --permanent --add-port=6783/tcp
    firewall-cmd --reload
    ```

    ### Enable br_netfilter
    Enable the br_netfilter kernel module. This is done with the following commands:

  10. stolsma revised this gist Oct 19, 2018. 1 changed file with 27 additions and 9 deletions.
    36 changes: 27 additions & 9 deletions 1-Install.md
    @@ -2,6 +2,12 @@

    First of all be root while doing this... `sudo su`

    ### Set hostname

    Set the hostname:

    `hostnamectl set-hostname 'k8s-master'`

    ## Installing KVM

    ### Install KVM
    @@ -61,7 +67,7 @@ Check the Bridge interface using below command:

    ## Installing Docker and K8S

    ### Disable SELinux and swap
    ### Disable SELinux

    First we need to disable SELinux. Issue the following commands:

    @@ -70,6 +76,8 @@ setenforce 0
    sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
    ```

    ### Disable swap

    Next, disable swap with the following command:

    `swapoff -a`
    @@ -80,6 +88,24 @@ We must also ensure that swap isn't re-enabled during a reboot on each server. O

    `vi /etc/fstab`
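Instead of editing `/etc/fstab` by hand, the swap entry can be commented out with `sed`. A sketch against a throwaway copy (on a real node you would run the `sed` against `/etc/fstab` itself):

```shell
# Demo file standing in for /etc/fstab.
printf '/dev/mapper/centos-swap swap swap defaults 0 0\n' > fstab.demo
# Prefix any not-yet-commented swap line with '#'.
sed -i -E 's/^([^#].*[[:space:]]swap[[:space:]])/#\1/' fstab.demo
cat fstab.demo
```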

    ### Configure the firewall

    At this moment just stop the firewall with:

    `systemctl stop firewalld`

    Or load the correct firewall rules for the Kubernetes master:

    ```
    firewall-cmd --permanent --add-port=6443/tcp
    firewall-cmd --permanent --add-port=2379-2380/tcp
    firewall-cmd --permanent --add-port=10250/tcp
    firewall-cmd --permanent --add-port=10251/tcp
    firewall-cmd --permanent --add-port=10252/tcp
    firewall-cmd --permanent --add-port=10255/tcp
    firewall-cmd --reload
    ```

    ### Enable br_netfilter
    Enable the br_netfilter kernel module. This is done with the following commands:

    @@ -112,14 +138,6 @@ systemctl start docker
    systemctl enable docker
    ```

    ### Configure the firewall

    At this moment just stop the firewall with:

    `systemctl stop firewalld'

    TODO: Set firewall rules for Kubernetes et al.

    ### Install Kubernetes

    First we need to create a repository entry for yum. To do this, issue the following command:
  11. stolsma revised this gist Oct 19, 2018. 1 changed file with 33 additions and 1 deletion.
    34 changes: 33 additions & 1 deletion 1-Install.md
    @@ -169,10 +169,42 @@ When this completes (it'll take anywhere from 30 seconds to 5 minutes), the outp

    Once that completes, head over to kube2 and issue the command (adjusting the IP address to fit your needs):

    `kubeadm join <MASTER_IP>:6443 --token TOKEN --discovery-token-ca-cert-hash sha256:DISCOVERY_TOKEN_HASH`
    `kubeadm join <MASTER_IP>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<DISCOVERY_TOKEN_HASH>`

    Where TOKEN and DISCOVERY_TOKEN_HASH are the tokens displayed after the initialization command completes. These are only valid for 24 hours!

    If you do not have the token, you can get it by running the following command on the master node:

    `kubeadm token list`

    The output is similar to this:

    ```
    TOKEN                     TTL  EXPIRES               USAGES                   DESCRIPTION                                                EXTRA GROUPS
    8ewj1p.9r9hcjoqgajrj4gi   23h  2018-06-12T02:51:28Z  authentication,signing   The default bootstrap token generated by 'kubeadm init'.  system:bootstrappers:kubeadm:default-node-token
    ```

    By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the master node:

    `kubeadm token create`

    The output is similar to this:

    `5didvk.d09sbcov8ph2amjw`

    If you don’t have the value of `--discovery-token-ca-cert-hash`, you can get it by running the following command chain on the master node:

    ```
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'
    ```
    The output is similar to this:

    `8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78`
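The pipeline above works against any CA certificate. A sketch using a throwaway self-signed certificate as a stand-in for `/etc/kubernetes/pki/ca.crt`:

```shell
# Create a throwaway certificate (stand-in for /etc/kubernetes/pki/ca.crt).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" -days 1 \
  -keyout demo-ca.key -out demo-ca.crt 2>/dev/null
# Same pipeline as above: SHA-256 of the DER-encoded public key.
hash=$(openssl x509 -pubkey -in demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "$hash"
```

The result is the 64-hex-character value you pass to `kubeadm join` after the `sha256:` prefix.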

    ### Configuring Kubernetes

    Before Kubernetes can be used, we must take care of a bit of configuration. Come out of root and issue the following three commands (to create a new .kube configuration directory, copy the necessary configuration file, and give the file the proper ownership):
  12. stolsma revised this gist Oct 19, 2018. 1 changed file with 48 additions and 7 deletions.
    55 changes: 48 additions & 7 deletions 1-Install.md
    @@ -96,7 +96,10 @@ Install the Docker-ce dependencies with the following command:

    Next, add the Docker-ce repository with the command:

    `yum-config-manager --add-repo https://gist.githubusercontent.com/stolsma/12457f4db016a86fea631fecee419989/raw/ef412fc4e282a3d83fa216b0be53757ecf1edf37/docker-ce.repo`
    ```
    yum-config-manager --add-repo https://gist.githubusercontent.com/stolsma/12457f4db016a86fea631fecee419989/raw/ef412fc4e282a3d83fa216b0be53757ecf1edf37/docker-ce.repo
    yum update
    ```

    Install Docker-ce with the command:

    @@ -109,21 +112,39 @@ systemctl start docker
    systemctl enable docker
    ```

    ### Configure the firewall

    At this moment just stop the firewall with:

    `systemctl stop firewalld'

    TODO: Set firewall rules for Kubernetes et al.

    ### Install Kubernetes

    First we need to create a repository entry for yum. To do this, issue the following command:

    `yum-config-manager --add-repo https://gist.githubusercontent.com/stolsma/12457f4db016a86fea631fecee419989/raw/ef412fc4e282a3d83fa216b0be53757ecf1edf37/kubernetes.repo`
    ```
    yum-config-manager --add-repo https://gist.githubusercontent.com/stolsma/12457f4db016a86fea631fecee419989/raw/ef412fc4e282a3d83fa216b0be53757ecf1edf37/kubernetes.repo
    yum update
    ```
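The repo file fetched above is not shown here; presumably it resembles the standard upstream Kubernetes yum repo definition of that era (an assumption — the gist's actual file may differ). Note the `exclude=kube*` line, which is why the install command needs `--disableexcludes=kubernetes`:

```
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
```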

    Install Kubernetes with the command:

    `yum install -y kubelet kubeadm kubectl`
    `yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes`

    and start the k8s kubelet daemon with:

    ```
    systemctl enable kubelet
    systemctl start kubelet
    ```

    Once the installation completes, reboot the machine.
    Once this part of the installation completes, you may reboot the machine, though a reboot should not be necessary.

    ### Cgroup changes

    Now we need to ensure that both Docker-ce and Kubernetes belong to the same control group (cgroup). By default, Docker should already belong to cgroupfs (you can check this with the command `docker info | grep -i cgroup`). To add Kubernetes to this, issue the command:
    Now we need to ensure that both Docker-ce and Kubernetes belong to the same control group (cgroup). By default, Docker should already belong to cgroupfs (you can check this with the command `docker info | grep -i cgroup`). To add Kubernetes to this cgroup as well, issue the command:

    `sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf`

    @@ -148,9 +169,9 @@ When this completes (it'll take anywhere from 30 seconds to 5 minutes), the outp

    Once that completes, head over to kube2 and issue the command (adjusting the IP address to fit your needs):

    `kubeadm join <MASTER_IP>:6443 --token TOKEN --discovery-token-ca-cert-hash DISCOVERY_TOKEN`
    `kubeadm join <MASTER_IP>:6443 --token TOKEN --discovery-token-ca-cert-hash sha256:DISCOVERY_TOKEN_HASH`

    Where TOKEN and DISCOVERY_TOKEN are the tokens displayed after the initialization command completes.
    Where TOKEN and DISCOVERY_TOKEN_HASH are the tokens displayed after the initialization command completes. These are only valid for 24 hours!

    ### Configuring Kubernetes

    @@ -164,10 +185,30 @@ sudo chown $(id -u):$(id -g) $HOME/.kube/config

    ### Deploy flannel network

    First, configure the kernel to pass bridged IPv4 traffic to iptables’ chains:

    `sysctl net.bridge.bridge-nf-call-iptables=1`

    Now we must deploy the flannel network to the cluster with the command:

    `kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml`

    ### Master Isolation

    By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, e.g. for a single-machine Kubernetes cluster for development, run:

    `kubectl taint nodes --all node-role.kubernetes.io/master-`

    With output looking something like:

    ```
    node "test-01" untainted
    taint "node-role.kubernetes.io/master:" not found
    taint "node-role.kubernetes.io/master:" not found
    ```

    This will remove the `node-role.kubernetes.io/master` taint from any nodes that have it, including the master node, meaning that the scheduler will then be able to schedule pods everywhere.

    ### Checking your nodes

    Once the deploy command completes, you should be able to see both nodes on the master by issuing the command `kubectl get nodes`
  13. stolsma revised this gist Oct 18, 2018. 1 changed file with 38 additions and 22 deletions.
    60 changes: 38 additions & 22 deletions 1-Install.md
    @@ -23,25 +23,31 @@ reboot`

    Before you start creating VMs, first create the bridge interface. A bridge interface is required if you want to access virtual machines from outside of your hypervisor network.

    `cd /etc/sysconfig/network-scripts/`
    `cp ifcfg-eno1 ifcfg-br0`
    ```
    cd /etc/sysconfig/network-scripts/
    cp ifcfg-eno1 ifcfg-br0
    ```

    Edit the interface file and set the following:

    `[root@ network-scripts]# vi ifcfg-eno1`
    `TYPE=Ethernet`
    `BOOTPROTO=static`
    `DEVICE=eno1`
    `ONBOOT=yes`
    `BRIDGE=br0`
    ```
    [root@ network-scripts]# vi ifcfg-eno1
    TYPE=Ethernet
    BOOTPROTO=static
    DEVICE=eno1
    ONBOOT=yes
    BRIDGE=br0
    ```

    Edit the bridge file (ifcfg-br0) and set the following:

    `[root@ network-scripts]# vi ifcfg-br0
    ```
    [root@ network-scripts]# vi ifcfg-br0
    TYPE=Bridge
    BOOTPROTO=static
    DEVICE=br0
    ONBOOT=yes`
    ONBOOT=yes
    ```

    Replace the IP address and DNS server details as per your setup.

    @@ -59,8 +65,10 @@ Check the Bridge interface using below command:

    First we need to disable both SELinux and swap. Issue the following commands:

    `setenforce 0`
    `sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux`
    ```
    setenforce 0
    sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
    ```

    Next, disable swap with the following command:

    @@ -75,8 +83,10 @@ We must also ensure that swap isn't re-enabled during a reboot on each server. O
    ### Enable br_netfilter
    Enable the br_netfilter kernel module. This is done with the following commands:

    `modprobe br_netfilter`
    `echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables`
    ```
    modprobe br_netfilter
    echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
    ```
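The `echo '1' > /proc/sys/...` setting above does not survive a reboot; it can be persisted through sysctl.d. A sketch writing the drop-in locally (on a node, the file belongs in `/etc/sysctl.d/`, followed by `sysctl --system`):

```shell
# Written to the current directory for illustration; on the node copy it
# to /etc/sysctl.d/ and then run: sysctl --system
printf 'net.bridge.bridge-nf-call-iptables = 1\n' > 99-k8s-bridge.conf
cat 99-k8s-bridge.conf
```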

    ### Install Docker-ce

    @@ -94,8 +104,10 @@ Install Docker-ce with the command:

    Start docker automatically on reboot and also now:

    `systemctl start docker`
    `systemctl enable docker`
    ```
    systemctl start docker
    systemctl enable docker
    ```

    ### Install Kubernetes

    @@ -117,16 +129,18 @@ Now we need to ensure that both Docker-ce and Kubernetes belong to the same cont

    Restart the systemd daemon and the kubelet service with the commands:

    `systemctl daemon-reload`
    `systemctl restart kubelet`
    ```
    systemctl daemon-reload
    systemctl restart kubelet
    ```

    ### Initialize the Kubernetes cluster

    We're now ready to initialize the Kubernetes cluster. This is done on kubemaster (and only on that machine). On kubemaster, issue the command (again, adjusting the IP addresses to fit your needs):

    `kubeadm init --apiserver-advertise-address=<MASTER_IP> --pod-network-cidr=<POD_NETWORK>/<POD_NETWORK_SUBNET_BITS>`

    With flannel:
    With flannel as the CNI plugin:

    `kubeadm init --apiserver-advertise-address=<MASTER_IP> --pod-network-cidr=10.244.0.0/16`

    @@ -142,9 +156,11 @@ Where TOKEN and DISCOVERY_TOKEN are the tokens displayed after the initializatio

    Before Kubernetes can be used, we must take care of a bit of configuration. Come out of root and issue the following three commands (to create a new .kube configuration directory, copy the necessary configuration file, and give the file the proper ownership):

    `mkdir -p $HOME/.kube`
    `sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config`
    `sudo chown $(id -u):$(id -g) $HOME/.kube/config`
    ```
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    ```

    ### Deploy flannel network

  14. stolsma revised this gist Oct 18, 2018. 1 changed file with 11 additions and 7 deletions.
    18 changes: 11 additions & 7 deletions 1-Install.md
    @@ -6,16 +6,20 @@ First of all be root while doing this... `sudo su`

    ### Install KVM

    `yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer bridge-utils`
    `systemctl start libvirtd`
    `systemctl enable libvirtd`
    `lsmod | grep kvm`
    ```
    yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer bridge-utils
    systemctl start libvirtd
    systemctl enable libvirtd
    lsmod | grep kvm
    ```

    If needed, install X Windows for use of the graphical virt-manager:

    `sudo yum groupinstall "GNOME Desktop" "Graphical Administration Tools"`
    `sudo ln -sf /lib/systemd/system/runlevel5.target /etc/systemd/system/default.target`
    `reboot`
    ```
    sudo yum groupinstall "GNOME Desktop" "Graphical Administration Tools"
    sudo ln -sf /lib/systemd/system/runlevel5.target /etc/systemd/system/default.target
    reboot
    ```

    Before you start creating VMs, first create the bridge interface. A bridge interface is required if you want to access virtual machines from outside of your hypervisor network.

  15. stolsma revised this gist Oct 18, 2018. 1 changed file with 38 additions and 25 deletions.
    63 changes: 38 additions & 25 deletions 1-Install.md
    @@ -1,33 +1,35 @@
    # Installing KVM, libvirt, Docker-CE and Kubernetes on Centos 7.x

    First of all be root while doing this... `sudo su`

    ## Installing KVM

    ### Install KVM

    `yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer bridge-utils
    systemctl start libvirtd
    systemctl enable libvirtd
    lsmod | grep kvm`
    `yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer bridge-utils`
    `systemctl start libvirtd`
    `systemctl enable libvirtd`
    `lsmod | grep kvm`

    If needed, install X Windows for use of the graphical virt-manager:

    `sudo yum groupinstall "GNOME Desktop" "Graphical Administration Tools"
    sudo ln -sf /lib/systemd/system/runlevel5.target /etc/systemd/system/default.target
    reboot`
    `sudo yum groupinstall "GNOME Desktop" "Graphical Administration Tools"`
    `sudo ln -sf /lib/systemd/system/runlevel5.target /etc/systemd/system/default.target`
    `reboot`

    Before you start creating VMs, first create the bridge interface. A bridge interface is required if you want to access virtual machines from outside of your hypervisor network.

    `cd /etc/sysconfig/network-scripts/
    cp ifcfg-eno1 ifcfg-br0`
    `cd /etc/sysconfig/network-scripts/`
    `cp ifcfg-eno1 ifcfg-br0`

    Edit the interface file and set the following:

    `[root@ network-scripts]# vi ifcfg-eno1
    TYPE=Ethernet
    BOOTPROTO=static
    DEVICE=eno1
    ONBOOT=yes
    BRIDGE=br0`
    `[root@ network-scripts]# vi ifcfg-eno1`
    `TYPE=Ethernet`
    `BOOTPROTO=static`
    `DEVICE=eno1`
    `ONBOOT=yes`
    `BRIDGE=br0`

    Edit the bridge file (ifcfg-br0) and set the following:

    @@ -53,8 +55,8 @@ Check the Bridge interface using below command:

    First we need to disable both SELinux and swap. Issue the following commands:

    `setenforce 0
    sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux`
    `setenforce 0`
    `sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux`

    Next, disable swap with the following command:

    @@ -64,11 +66,13 @@ We must also ensure that swap isn't re-enabled during a reboot on each server. O

    `# /dev/mapper/centos-swap swap swap defaults 0 0`

    `vi /etc/fstab`

    ### Enable br_netfilter
    Enable the br_netfilter kernel module. This is done with the following commands:

    `modprobe br_netfilter
    echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables`
    `modprobe br_netfilter`
    `echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables`

    ### Install Docker-ce

    @@ -84,6 +88,11 @@ Install Docker-ce with the command:

    `yum install -y docker-ce`

    Start docker automatically on reboot and also now:

    `systemctl start docker`
    `systemctl enable docker`

    ### Install Kubernetes

    First we need to create a repository entry for yum. To do this, issue the following command :
    @@ -104,15 +113,19 @@ Now we need to ensure that both Docker-ce and Kubernetes belong to the same cont

    Restart the systemd daemon and the kubelet service with the commands:

    `systemctl daemon-reload
    systemctl restart kubelet`
    `systemctl daemon-reload`
    `systemctl restart kubelet`

    ### Initialize the Kubernetes cluster

    We're now ready to initialize the Kubernetes cluster. This is done on kubemaster (and only on that machine). On kubemaster, issue the command (again, adjusting the IP addresses to fit your needs):

    `kubeadm init --apiserver-advertise-address=<MASTER_IP> --pod-network-cidr=<POD_NETWORK>/<POD_NETWORK_SUBNET_BITS>`

    With flannel:

    `kubeadm init --apiserver-advertise-address=<MASTER_IP> --pod-network-cidr=10.244.0.0/16`

    When this completes (it'll take anywhere from 30 seconds to 5 minutes), the output should include the joining command for your nodes.

    Once that completes, head over to kube2 and issue the command (adjusting the IP address to fit your needs):
    @@ -123,11 +136,11 @@ Where TOKEN and DISCOVERY_TOKEN are the tokens displayed after the initializatio

    ### Configuring Kubernetes

    Before Kubernetes can be used, we must take care of a bit of configuration. Issue the following three commands (to create a new .kube configuration directory, copy the necessary configuration file, and give the file the proper ownership):
    Before Kubernetes can be used, we must take care of a bit of configuration. Come out of root and issue the following three commands (to create a new .kube configuration directory, copy the necessary configuration file, and give the file the proper ownership):

    `mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config`
    `mkdir -p $HOME/.kube`
    `sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config`
    `sudo chown $(id -u):$(id -g) $HOME/.kube/config`

    ### Deploy flannel network

  16. stolsma revised this gist Oct 17, 2018. 1 changed file with 61 additions and 12 deletions.
    73 changes: 61 additions & 12 deletions 1-Install.md
    @@ -1,6 +1,55 @@
    # Installing Docker-CE and Kubernetes on Centos 7.x
    # Installing KVM, libvirt, Docker-CE and Kubernetes on Centos 7.x

    ## Disable SELinux and swap
    ## Installing KVM

    ### Install KVM

    `yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer bridge-utils
    systemctl start libvirtd
    systemctl enable libvirtd
    lsmod | grep kvm`

    If needed, install X Windows for use of the graphical virt-manager:

    `sudo yum groupinstall "GNOME Desktop" "Graphical Administration Tools"
    sudo ln -sf /lib/systemd/system/runlevel5.target /etc/systemd/system/default.target
    reboot`

    Before you start creating VMs, first create the bridge interface. A bridge interface is required if you want to access virtual machines from outside of your hypervisor network.

    `cd /etc/sysconfig/network-scripts/
    cp ifcfg-eno1 ifcfg-br0`

    Edit the interface file and set the following:

    `[root@ network-scripts]# vi ifcfg-eno1
    TYPE=Ethernet
    BOOTPROTO=static
    DEVICE=eno1
    ONBOOT=yes
    BRIDGE=br0`

    Edit the bridge file (ifcfg-br0) and set the following:

    `[root@ network-scripts]# vi ifcfg-br0
    TYPE=Bridge
    BOOTPROTO=static
    DEVICE=br0
    ONBOOT=yes`

    Replace the IP address and DNS server details as per your setup.

    Restart the network service to enable the bridge interface.

    `systemctl restart network`

    Check the bridge interface using the command below:

    `ip addr show br0`

    ## Installing Docker and K8S

    ### Disable SELinux and swap

    First we need to disable both SELinux and swap. Issue the following commands:

    @@ -15,13 +64,13 @@ We must also ensure that swap isn't re-enabled during a reboot on each server. O

    `# /dev/mapper/centos-swap swap swap defaults 0 0`

    ## Enable br_netfilter
    ### Enable br_netfilter
    Enable the br_netfilter kernel module. This is done with the following commands:

    `modprobe br_netfilter
    echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables`

    ## Install Docker-ce
    ### Install Docker-ce

    Install the Docker-ce dependencies with the following command:

    @@ -35,7 +84,7 @@ Install Docker-ce with the command:

    `yum install -y docker-ce`

    ## Install Kubernetes
    ### Install Kubernetes

    First we need to create a repository entry for yum. To do this, issue the following command :

    @@ -47,7 +96,7 @@ Install Kubernetes with the command:

    Once the installation completes, reboot the machine.

    ## Cgroup changes
    ### Cgroup changes

    Now we need to ensure that both Docker-ce and Kubernetes belong to the same control group (cgroup). By default, Docker should already belong to cgroupfs (you can check this with the command `docker info | grep -i cgroup`). To add Kubernetes to this, issue the command:

    @@ -58,7 +107,7 @@ Restart the systemd daemon and the kubelet service with the commands:
    `systemctl daemon-reload
    systemctl restart kubelet`

    ## Initialize the Kubernetes cluster
    ### Initialize the Kubernetes cluster

    We're now ready to initialize the Kubernetes cluster. This is done on kubemaster (and only on that machine). On kubemaster, issue the command (again, adjusting the IP addresses to fit your needs):

    @@ -72,28 +121,28 @@ Once that completes, head over to kube2 and issue the command (adjusting the IP

    Where TOKEN and DISCOVERY_TOKEN are the tokens displayed after the initialization command completes.

    ## Configuring Kubernetes
    ### Configuring Kubernetes

    Before Kubernetes can be used, we must take care of a bit of configuration. Issue the following three commands (to create a new .kube configuration directory, copy the necessary configuration file, and give the file the proper ownership):

    `mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config`

    ## Deploy flannel network
    ### Deploy flannel network

    Now we must deploy the flannel network to the cluster with the command:

    `kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml`

    ## Checking your nodes
    ### Checking your nodes

    Once the deploy command completes, you should be able to see both nodes on the master by issuing the command `kubectl get nodes`

    ## All ready
    ### All ready

    Congratulations, you now have a Kubernetes cluster ready for pods.

    ## Remove Kubernetes from your system
    ### Remove Kubernetes from your system

    `sudo kubeadm reset`
  17. stolsma revised this gist Oct 17, 2018. 1 changed file with 5 additions and 1 deletion.
    6 changes: 5 additions & 1 deletion 1-Install.md
    @@ -92,4 +92,8 @@ Once the deploy command completes, you should be able to see both nodes on the m

    ## All ready

-Congratulations, you now have a Kubernetes cluster ready for pods.
+Congratulations, you now have a Kubernetes cluster ready for pods.
+
+## Remove Kubernetes from your system
+
+`sudo kubeadm reset`
  18. stolsma revised this gist Oct 5, 2018. 1 changed file with 21 additions and 0 deletions.
    21 changes: 21 additions & 0 deletions 1-Install.md
    @@ -1,5 +1,26 @@
    # Installing Docker-CE and Kubernetes on Centos 7.x

    ## Disable SELinux and swap

    First we need to disable both SELinux and swap. Issue the following commands:

    `setenforce 0
    sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux`

    Next, disable swap with the following command:

    `swapoff -a`

    We must also ensure that swap isn't re-enabled during a reboot on each server. Open up the `/etc/fstab` and comment out the swap entry like this:

    `# /dev/mapper/centos-swap swap swap defaults 0 0`

    ## Enable br_netfilter
    Enable the br_netfilter kernel module. This is done with the following commands:

    `modprobe br_netfilter
    echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables`

    ## Install Docker-ce

    Install the Docker-ce dependencies with the following command:
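The `/etc/fstab` edit in this revision can be scripted instead of done by hand. A minimal sketch, run against a temporary copy rather than the real `/etc/fstab` (the sample entries are illustrative):

```shell
# Build a sample fstab to work on; point the sed at /etc/fstab only after
# verifying the expression does what you expect.
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /       xfs  defaults 0 0
/dev/mapper/centos-swap swap    swap defaults 0 0
EOF

# Prefix '#' to any line whose fstype/mountpoint field is "swap".
sed -i '/[[:space:]]swap[[:space:]]/ s/^/#/' /tmp/fstab.demo

grep swap /tmp/fstab.demo
# -> #/dev/mapper/centos-swap swap    swap defaults 0 0
```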
  19. stolsma revised this gist Oct 5, 2018. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions 1-Install.md
    @@ -59,9 +59,9 @@ Before Kubernetes can be used, we must take care of a bit of configuration. Issu
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config`

-Deploy flannel network
+## Deploy flannel network

-## Now we must deploy the flannel network to the cluster with the command:
+Now we must deploy the flannel network to the cluster with the command:

`kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml`

  20. stolsma revised this gist Oct 5, 2018. 1 changed file with 52 additions and 3 deletions.
    55 changes: 52 additions & 3 deletions 1-Install.md
    @@ -1,4 +1,6 @@
-# Install Docker-ce
+# Installing Docker-CE and Kubernetes on Centos 7.x

    ## Install Docker-ce

    Install the Docker-ce dependencies with the following command:

    @@ -12,7 +14,7 @@ Install Docker-ce with the command:

    `yum install -y docker-ce`

-# Install Kubernetes
+## Install Kubernetes

    First we need to create a repository entry for yum. To do this, issue the following command :

    @@ -22,4 +24,51 @@ Install Kubernetes with the command:

    `yum install -y kubelet kubeadm kubectl`

-Once the installation completes, reboot the machine.
+Once the installation completes, reboot the machine.

    ## Cgroup changes

    Now we need to ensure that both Docker-ce and Kubernetes belong to the same control group (cgroup). By default, Docker should already belong to cgroupfs (you can check this with the command `docker info | grep -i cgroup`). To add Kubernetes to this, issue the command:

    `sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf`

    Restart the systemd daemon and the kubelet service with the commands:

    `systemctl daemon-reload
    systemctl restart kubelet`

    ## Initialize the Kubernetes cluster

    We're now ready to initialize the Kubernetes cluster. This is done on kubemaster (and only on that machine). On kubemaster, issue the command (again, adjusting the IP addresses to fit your needs):

    `kubeadm init --apiserver-advertise-address=<MASTER_IP> --pod-network-cidr=<POD_NETWORK>/<POD_NETWORK_SUBNET_BITS>`

    When this completes (it'll take anywhere from 30 seconds to 5 minutes), the output should include the joining command for your nodes.

    Once that completes, head over to kube2 and issue the command (adjusting the IP address to fit your needs):

    `kubeadm join <MASTER_IP>:6443 --token TOKEN --discovery-token-ca-cert-hash DISCOVERY_TOKEN`

    Where TOKEN and DISCOVERY_TOKEN are the tokens displayed after the initialization command completes.

    ## Configuring Kubernetes

    Before Kubernetes can be used, we must take care of a bit of configuration. Issue the following three commands (to create a new .kube configuration directory, copy the necessary configuration file, and give the file the proper ownership):

    `mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config`

    Deploy flannel network

    ## Now we must deploy the flannel network to the cluster with the command:

`kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml`

    ## Checking your nodes

    Once the deploy command completes, you should be able to see both nodes on the master, by issuing the command `kubectl get nodes`

    ## All ready

    Congratulations, you now have a Kubernetes cluster ready for pods.
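The cgroup-driver `sed` substitution introduced in this revision is easy to dry-run on a throwaway copy before touching the real kubelet drop-in at `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` (the sample `Environment=` line below is illustrative, not the file's exact contents):

```shell
# Stand-in for the kubelet systemd drop-in.
cat > /tmp/10-kubeadm.conf.demo <<'EOF'
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=systemd"
EOF

# The exact substitution from the guide, applied to the copy.
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /tmp/10-kubeadm.conf.demo

# Confirm the driver flag was rewritten.
grep cgroup-driver /tmp/10-kubeadm.conf.demo
```

Once the expression is verified, the same `sed -i` can be pointed at the real file, followed by `systemctl daemon-reload` as the guide describes.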
  21. stolsma renamed this gist Oct 5, 2018. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions Install.md → 1-Install.md
    @@ -1,4 +1,4 @@
-#Install Docker-ce
+# Install Docker-ce

    Install the Docker-ce dependencies with the following command:

    @@ -12,7 +12,7 @@ Install Docker-ce with the command:

    `yum install -y docker-ce`

-#Install Kubernetes
+# Install Kubernetes

    First we need to create a repository entry for yum. To do this, issue the following command :

  22. stolsma revised this gist Oct 5, 2018. 1 changed file with 25 additions and 0 deletions.
    25 changes: 25 additions & 0 deletions Install.md
    @@ -0,0 +1,25 @@
    #Install Docker-ce

    Install the Docker-ce dependencies with the following command:

    `yum install -y yum-utils device-mapper-persistent-data lvm2`

    Next, add the Docker-ce repository with the command:

`yum-config-manager --add-repo https://gist.githubusercontent.com/stolsma/12457f4db016a86fea631fecee419989/raw/ef412fc4e282a3d83fa216b0be53757ecf1edf37/docker-ce.repo`

    Install Docker-ce with the command:

    `yum install -y docker-ce`

    #Install Kubernetes

    First we need to create a repository entry for yum. To do this, issue the following command :

`yum-config-manager --add-repo https://gist.githubusercontent.com/stolsma/12457f4db016a86fea631fecee419989/raw/ef412fc4e282a3d83fa216b0be53757ecf1edf37/kubernetes.repo`

    Install Kubernetes with the command:

    `yum install -y kubelet kubeadm kubectl`

    Once the installation completes, reboot the machine.
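Before `yum-config-manager --add-repo` drops a fetched `.repo` file into `/etc/yum.repos.d/`, a quick sanity check catches truncated or garbled downloads. A sketch — the heredoc stands in for the fetched file:

```shell
# Stand-in for the downloaded docker-ce.repo (trimmed to one section).
cat > /tmp/docker-ce.repo.demo <<'EOF'
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://download.docker.com/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
EOF

# A usable repo section needs at least a [section] header, a baseurl,
# and gpgcheck=1 so packages are signature-verified.
grep -q '^\[docker-ce-stable\]' /tmp/docker-ce.repo.demo \
  && grep -q '^baseurl=' /tmp/docker-ce.repo.demo \
  && grep -q '^gpgcheck=1' /tmp/docker-ce.repo.demo \
  && echo "repo file looks sane"
```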
  23. stolsma created this gist Oct 5, 2018.
    83 changes: 83 additions & 0 deletions docker-ce.repo
    @@ -0,0 +1,83 @@
    [docker-ce-stable]
    name=Docker CE Stable - $basearch
    baseurl=https://download.docker.com/linux/centos/7/$basearch/stable
    enabled=1
    gpgcheck=1
    gpgkey=https://download.docker.com/linux/centos/gpg

    [docker-ce-stable-debuginfo]
    name=Docker CE Stable - Debuginfo $basearch
    baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/stable
    enabled=0
    gpgcheck=1
    gpgkey=https://download.docker.com/linux/centos/gpg

    [docker-ce-stable-source]
    name=Docker CE Stable - Sources
    baseurl=https://download.docker.com/linux/centos/7/source/stable
    enabled=0
    gpgcheck=1
    gpgkey=https://download.docker.com/linux/centos/gpg

    [docker-ce-edge]
    name=Docker CE Edge - $basearch
    baseurl=https://download.docker.com/linux/centos/7/$basearch/edge
    enabled=0
    gpgcheck=1
    gpgkey=https://download.docker.com/linux/centos/gpg

    [docker-ce-edge-debuginfo]
    name=Docker CE Edge - Debuginfo $basearch
    baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/edge
    enabled=0
    gpgcheck=1
    gpgkey=https://download.docker.com/linux/centos/gpg

    [docker-ce-edge-source]
    name=Docker CE Edge - Sources
    baseurl=https://download.docker.com/linux/centos/7/source/edge
    enabled=0
    gpgcheck=1
    gpgkey=https://download.docker.com/linux/centos/gpg

    [docker-ce-test]
    name=Docker CE Test - $basearch
    baseurl=https://download.docker.com/linux/centos/7/$basearch/test
    enabled=0
    gpgcheck=1
    gpgkey=https://download.docker.com/linux/centos/gpg

    [docker-ce-test-debuginfo]
    name=Docker CE Test - Debuginfo $basearch
    baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/test
    enabled=0
    gpgcheck=1
    gpgkey=https://download.docker.com/linux/centos/gpg

    [docker-ce-test-source]
    name=Docker CE Test - Sources
    baseurl=https://download.docker.com/linux/centos/7/source/test
    enabled=0
    gpgcheck=1
    gpgkey=https://download.docker.com/linux/centos/gpg

    [docker-ce-nightly]
    name=Docker CE Nightly - $basearch
    baseurl=https://download.docker.com/linux/centos/7/$basearch/nightly
    enabled=0
    gpgcheck=1
    gpgkey=https://download.docker.com/linux/centos/gpg

    [docker-ce-nightly-debuginfo]
    name=Docker CE Nightly - Debuginfo $basearch
    baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/nightly
    enabled=0
    gpgcheck=1
    gpgkey=https://download.docker.com/linux/centos/gpg

    [docker-ce-nightly-source]
    name=Docker CE Nightly - Sources
    baseurl=https://download.docker.com/linux/centos/7/source/nightly
    enabled=0
    gpgcheck=1
    gpgkey=https://download.docker.com/linux/centos/gpg
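Of all the sections in the file above, only `docker-ce-stable` ships with `enabled=1`; edge, test, and nightly channels stay off until switched on explicitly. A small awk sketch that lists the enabled sections (trimmed sample data):

```shell
# Trimmed stand-in for docker-ce.repo: one enabled channel, two disabled.
cat > /tmp/repo-scan.demo <<'EOF'
[docker-ce-stable]
enabled=1
[docker-ce-edge]
enabled=0
[docker-ce-test]
enabled=0
EOF

# Remember the last section header seen; print it on every enabled=1 line.
awk '/^\[/{sec=$0} /^enabled=1$/{print sec}' /tmp/repo-scan.demo
# -> [docker-ce-stable]
```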
    8 changes: 8 additions & 0 deletions kubernetes.repo
    @@ -0,0 +1,8 @@
    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg