Kubernetes Setup

Naveen Singh
7 min read · Nov 5, 2022

I was attempting to set up a Kubernetes cluster with kubeadm, and I read the official documentation, some great blogs, and a YouTube video to do so. I started following the steps and BOOM, the steps contradicted my theoretical knowledge and left my brain handicapped.

Is this the case with you? Don’t worry, Rohit. I’ll answer all of your questions.

What is kubeadm?

Kubeadm is a tool built to provide kubeadm init and kubeadm join as best-practice "fast paths" for creating Kubernetes clusters.

  • Using kubeadm, it is easy to connect the master and worker nodes that make up our cluster.
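In practice those two "fast paths" boil down to one command on each side. A rough sketch (the real token and hash come from the output of kubeadm init, so the values below are placeholders):

# on the control-plane (master) node
sudo kubeadm init

# on every worker node, using the join command that kubeadm init prints
sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>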

Why Kubeadm?

Because I wanted to use it.

KUBEADM

The Prerequisites

Kubernetes has its own demands, or prerequisites, before it can be installed:

  • Minimum RAM: 2GB
  • Minimum Processor: 2
  • Disable swap memory
  • set SELinux to permissive
  • kernel modules “overlay” and “br_netfilter”
  • sysctl params (iptables bridge settings)
  • a container runtime to run pods
  • Kubernetes repo to install components
  • Unique hostnames for the nodes (control and worker)
  • set up /etc/hosts
  • firewall rules if a firewall is active
  • a person to set up all this (Heavy komedy)

The Setup

For ease, I first launched a single virtual machine and cloned it after fulfilling all the prerequisites.

  • O.S. → Rocky Linux ✅
  • RAM ✅
  • Processor ✅
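  • To double-check those two ✅s before moving on, a quick sanity check (nothing Kubernetes-specific, just the usual Linux tools):
free -h    # total memory should be at least 2 GB
nproc      # should print 2 or more CPUs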
  • Disable swap memory ✅
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
free -m
Image 1
  • Image 1 shows the swap value equal to 0, with the swap line commented out by the sed command. It can also be done manually by editing the /etc/fstab file.
  • set SELinux to permissive ✅
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

sestatus
Image 2
  • In Image 2, check the mode line.
  • kernel modules “overlay” and “br_netfilter” ✅
# Kubernetes requires the kernel modules
# "overlay" and "br_netfilter" to be enabled on all servers.
# This lets iptables see bridged traffic.
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
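  • To confirm the modules are loaded and the sysctl values took effect, a quick check (just a verification sketch; each value should come back as 1):
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward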
  • A container runtime(CRI) ✅
sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install docker-ce -y
sudo systemctl start docker && sudo systemctl enable docker
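  • docker-ce pulls in containerd.io as a dependency, so containerd should already be on the machine. A quick way to confirm (sketch):
containerd --version
systemctl status containerd --no-pager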
  • For containerd as the runtime → Kubernetes has removed Docker (dockershim) as a supported runtime, and in our case kubelet selects containerd as the CRI by default (containerd gets installed alongside Docker). In containerd's default config the systemd cgroup option is set to false, which we have to change by doing this:
sudo mv /etc/containerd/config.toml /etc/containerd/config.toml.bkp
containerd config default | sudo tee /etc/containerd/config.toml
sudo vim /etc/containerd/config.toml   # do the same as in Image 3
Make it true (Image 3)
  • change SystemdCgroup to true (done in Image 3), then
sudo systemctl restart containerd
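  • If you would rather not open vim, the same SystemdCgroup change can be made with a sed one-liner followed by the same restart (an alternative sketch, assuming the default config generated above):
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd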
  • If you are planning to use Docker as the container runtime (by default Kubernetes uses containerd, which gets installed with Docker), also check the cgroup driver name. Simply run sudo docker info | grep -i cgroup. If it matches the name of your service manager, you're fine; otherwise a daemon.json entry needs to be created. In my case it was already systemd, so this step was optional for me.
sudo vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

sudo systemctl restart docker
  • Kubernetes repo to install components ✅
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
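  • A quick way to confirm all three components landed on the node (verification sketch):
kubeadm version
kubectl version --client
kubelet --version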

Currently I have one VM, and the next step demands unique hostnames for the master and worker nodes.

Now, what I’m going to do is clone the above machine, which makes our life easier as we don’t have to repeat the same steps for the worker nodes. As I need only one worker node, I’ll make a single clone, but you can make as many as you want.

Image 4
  • In Oracle VirtualBox, a clone can be made easily.
Image 5
  • The MAC address policy is the important setting, and the new machine should get a different IP.
Image 6
  • Control-node hostname is “kube-master” and IP is 192.168.1.13
Image 7
  • Worker node hostname is “wn1” and IP is 192.168.1.14
  • To set a custom hostname, use sudo hostnamectl set-hostname bobthebuilder
  • They both got their unique hostname ✅
  • Now, add the entries in /etc/hosts
Image 8
Image 9
  • In Images 8 and 9, both the control and the worker node’s /etc/hosts have been updated with the IP and hostname of each node in the cluster (a sample of the entries is sketched below). WHY?
  • This is so that the nodes can reach each other by hostname, e.g. ping the worker node from the control node and vice versa. Check Image 9 for the same: w4 is not in /etc/hosts, so it does not resolve.
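  • For reference, the entries added on both nodes look roughly like this (using the hostnames and IPs from Images 6 and 7):
192.168.1.13   kube-master
192.168.1.14   wn1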
Image 10
  • Firewall rules for control node
# For the Kubernetes control plane, you need to open the following ports:
Protocol  Direction  Port Range  Purpose                   Used By
------------------------------------------------------------------------------
TCP       Inbound    6443        Kubernetes API server     All
TCP       Inbound    2379-2380   etcd server client API    kube-apiserver, etcd
TCP       Inbound    10250       Kubelet API               Self, Control plane
TCP       Inbound    10259       kube-scheduler            Self
TCP       Inbound    10257       kube-controller-manager   Self
------------------------------------------------------------------------------
sudo firewall-cmd --add-port=6443/tcp --permanent
sudo firewall-cmd --add-port=2379-2380/tcp --permanent
sudo firewall-cmd --add-port=10250/tcp --permanent
sudo firewall-cmd --add-port=10259/tcp --permanent
sudo firewall-cmd --add-port=10257/tcp --permanent

sudo firewall-cmd --reload
sudo firewall-cmd --list-all
  • Firewall rules for worker node
# For the Kubernetes worker nodes, you need to open the following ports:
Protocol  Direction  Port Range   Purpose                                  Used By
-----------------------------------------------------------------------------------------
TCP       Inbound    10250        Kubelet API                              Self, Control plane
TCP       Inbound    30000-32767  NodePort Services (default port range)   All
-----------------------------------------------------------------------------------------
sudo firewall-cmd --add-port=10250/tcp --permanent
sudo firewall-cmd --add-port=30000-32767/tcp --permanent

sudo firewall-cmd --reload
sudo firewall-cmd --list-all
  • a person to set up all this (Heavy komedy) ✅

Time for QnA

  • Q: In the Kubernetes architecture, the container runtime is only present on worker nodes, but here we cloned the VMs, which means both the control and worker nodes have Docker installed. Why did we do this? Why do we need Docker on the control node? Why? Why? 😭
  • A: The reason is that kubeadm deploys etcd, the API server, and the other control-plane components as containers (pods), so a container runtime is needed on the control node too.
  • Q: You also installed kubelet on the control node, where it is clearly mentioned that kubelet is part of the worker node. Again, why? 😡
  • A: On a worker node, kubelet is responsible for running pods. In this case, the control-node components themselves come in the form of pods, which is why kubelet is needed on this node too. It deploys the components and makes sure every pod is running fine, and that is also the reason we cannot stop it (see the sketch after this QnA).
  • Q: Similarly, kubectl is installed on the worker node. Is that reasonable?
  • A: kubectl is the command-line tool used to interact with the cluster. Generally it does not need to be on a worker node, but there is no harm if it is there.
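  • To see this for yourself: once kubeadm init has run (later in this post), the control-plane components live as static pod manifests that kubelet picks up from the default kubeadm path on the control node:
ls /etc/kubernetes/manifests/
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml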

Let’s continue the setup

  • After the firewall rules are in place, we have to set up a Container Network Interface (CNI) plugin. The CNI takes care of pod subnets, routes, and iptables, and there are lots of them available. I’m using Cilium (you can use another one). This should be done on the control node only.
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
  • Now we have to set up the control-node components, and for this we just have to pull the required images on the master node:
sudo kubeadm config images pull
  • Finally, the time has come when the almighty ‘kubeadm’ will show its power (on the master node):
sudo kubeadm init
  • Once your cluster is initialized, run the following as a non-root user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get pods --all-namespaces
  • To join the cluster, run on the worker node (the exact command, with its token and hash, is printed at the end of kubeadm init):
sudo kubeadm join 192.168.1.13:6443 --token <token> --discovery-token-ca-cert-hash <certs>
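  • If you lose the join command printed by kubeadm init, it can be regenerated on the control node at any time:
sudo kubeadm token create --print-join-command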
  • On the master node, run kubectl cluster-info
  • Install Cilium for Kubernetes by running on the master:
cilium install
  • Now you are free to deploy your applications and can add more worker nodes.
# on the control node, for a demo
kubectl create deploy nginx-deployment --image=nginx
kubectl expose deployment nginx-deployment --port=80
Image 11
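  • A quick way to check that the demo actually responds, run on the control node (a sketch; the jsonpath just pulls the ClusterIP of the service exposed above, which should return the default nginx welcome page):
kubectl get deployment,pods,service -o wide
curl http://$(kubectl get svc nginx-deployment -o jsonpath='{.spec.clusterIP}')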
  • kubeadm reset is used to reset the cluster created by kubeadm init and kubeadm join.
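  • Note that kubeadm reset does not clean up everything; it reminds you to remove the CNI configuration and your kubeconfig yourself. A rough cleanup sketch:
sudo kubeadm reset
sudo rm -rf /etc/cni/net.d
rm -f $HOME/.kube/config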

Give this setup a try.

  • Getting started with k8s? Check out my other blogs → LINK

Thanks for reading!!!
