Installing a Simple Kubernetes Cluster on Ubuntu Server 20.04

Ach.Chusnul Chikam
5 min read · Oct 13, 2021

Kubernetes is a tool for orchestrating and managing Docker containers at scale on on-premises servers or across hybrid cloud environments. Kubeadm is a tool provided with Kubernetes to help users install a production-ready Kubernetes cluster with best practices enforced. This tutorial demonstrates how to install a Kubernetes cluster on Ubuntu 20.04 with kubeadm.

When you deploy Kubernetes, you get a cluster. A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node. The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault tolerance and high availability. The cluster built in this guide follows this standard component layout.

More about Kubernetes components: https://kubernetes.io/docs/concepts/overview/components/

Prerequisites:

  • 2 or more Linux servers running Ubuntu 20.04
  • 3.75 GB or more of RAM
  • 2 CPUs or more
  • Internet Connection
  • Certain ports are open on your machines (see the firewall sketch after this list).
  • Swap disabled. You MUST disable swap for the kubelet to work properly.
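As a rough guide to the ports item above, the sketch below opens the ports listed in the Kubernetes documentation using ufw. Treat it as an illustration and adapt it to your own firewall setup; Weave Net, used later in this guide, additionally needs 6783/tcp and 6783-6784/udp between nodes.

# On the master (control-plane) node
sudo ufw allow 6443/tcp          # Kubernetes API server
sudo ufw allow 2379:2380/tcp     # etcd client API
sudo ufw allow 10250/tcp         # kubelet API

# On the worker nodes
sudo ufw allow 10250/tcp         # kubelet API
sudo ufw allow 30000:32767/tcp   # NodePort Services

# On all nodes (Weave Net control and data plane)
sudo ufw allow 6783/tcp
sudo ufw allow 6783:6784/udp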

My Environment:

Setting Up Kubernetes on Ubuntu 20.04

Let’s take a look at the steps required to set up Kubernetes on Ubuntu 20.04. The steps look like the following:

1. Configure the Nodes and Patch Ubuntu

Set up passwordless SSH and update the servers; execute on all nodes:

ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub stack@master
ssh-copy-id -i ~/.ssh/id_rsa.pub stack@worker1
ssh-copy-id -i ~/.ssh/id_rsa.pub stack@worker2
sudo apt-get update -y
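The ssh-copy-id commands above assume that each node can resolve master, worker1, and worker2 by name. If you do not have DNS entries for these hostnames, one simple option is to add them to /etc/hosts on every node. The IP addresses below are placeholders for illustration; substitute your own.

cat <<EOF | sudo tee -a /etc/hosts
192.168.1.234 master
192.168.1.235 worker1
192.168.1.236 worker2
EOF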

2. Install Docker

Kubernetes is not a container runtime in itself; it provides container orchestration and management, so we need to install an underlying container runtime. Docker is the most widely used container runtime, and it is what I am using in my lab build to set up Kubernetes on Ubuntu 20.04.

sudo apt-get install -y wget gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update -y
sudo apt-get install -y docker-ce docker-ce-cli containerd.io

3. Start and Enable Docker

Configure the Docker daemon, in particular, to use systemd for the management of the container’s cgroups

sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

Restart Docker and enable on boot

sudo systemctl enable --now docker
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl status docker
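To confirm that Docker picked up the systemd cgroup driver from daemon.json, check the daemon info:

sudo docker info | grep -i "cgroup driver"
# Expected output: Cgroup Driver: systemd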

Allow running Docker commands as a non-root user by adding your user to the docker group

sudo usermod -aG docker ${USER}
#log in again to apply the change
su - $USER
id
docker --version
docker ps
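As a quick smoke test that the group change took effect, run a throwaway container without sudo:

docker run --rm hello-world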

4. Disable SWAP

Disable swap

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
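To verify that swap is really off, swapon should print nothing and free should report 0B of swap:

sudo swapon --show
free -h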

Letting iptables see bridged traffic

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
# load the module now so the sysctl keys below apply without a reboot
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
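Confirm that the module is loaded and the settings took effect:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables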

5. Install kubeadm, kubelet, and kubectl

Update the apt package index and install packages needed to use the Kubernetes apt repository:

sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl

Download the Google Cloud public signing key:

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

Add the Kubernetes apt repository:

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Update apt package index, install kubelet, kubeadm and kubectl, and pin their version:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
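Before initializing the cluster, it is worth confirming the versions that were installed and that the packages are held:

kubeadm version
kubectl version --client
kubelet --version
apt-mark showhold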

6. Initialize the Cluster and Master Node

Now that the components are installed on all nodes, we can initialize the cluster and the master node. To do that, run the following command on the node that will act as the master node of the Kubernetes cluster.

sudo kubeadm init

Configure kubectl using commands in the output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check cluster status:

kubectl cluster-info
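At this stage the master node will typically report a NotReady status, because no pod network add-on has been installed yet; that is expected and is fixed in the next step.

kubectl get nodes
# STATUS shows NotReady until the network overlay from the next step is applied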

7. Install the Networking Overlay

Kubernetes requires a networking overlay (pod network add-on). Here I am choosing the Weave networking overlay for my Kubernetes cluster. To apply the Weave network overlay, use the following commands:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl get node -o wide
kubectl get pod --all-namespaces

8. Join the Worker Nodes to the Kubernetes Cluster

At this point, you should have a Kubernetes cluster with only the master node joined. We now need to join the worker nodes to the cluster. To do that, instead of kubeadm init, use the kubeadm join command that kubeadm init printed; it is used to add a worker node to the cluster.

sudo kubeadm join 192.168.1.234:6443 --token axwp1d.XXXXXXXXXX \
--discovery-token-ca-cert-hash sha256:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX1e74
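If you no longer have the join command from the kubeadm init output, or the token has expired (tokens are valid for 24 hours by default), you can generate a fresh one on the master node:

kubeadm token create --print-join-command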

On the master node, verify that the worker nodes have joined the cluster

kubectl get nodes

9. Deploy an Application on the Cluster

We can validate that the cluster is working by deploying an application. Follow the steps below to create an example nginx Deployment:

kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml

Run kubectl get deployments to check whether the Deployment was created:
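The expected shape of the output looks roughly like this (ages and counts depend on how long the rollout has been running); the Deployment from the example manifest is named nginx-deployment and requests three replicas:

kubectl get deployments
# NAME               READY   UP-TO-DATE   AVAILABLE   AGE
# nginx-deployment   3/3     3            3           18s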

Notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available.

To see the ReplicaSet (rs) created by the Deployment, run kubectl get rs. The output is similar to this:
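A sketch of the output, with xxxxxxxxxx standing in for the generated pod-template-hash suffix:

kubectl get rs
# NAME                          DESIRED   CURRENT   READY   AGE
# nginx-deployment-xxxxxxxxxx   3         3         3       18s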

To see the labels automatically generated for each Pod, run kubectl get pods --show-labels. The output is similar to:
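Roughly, again with generated suffixes shown as placeholders:

kubectl get pods --show-labels
# NAME                                READY   STATUS    RESTARTS   AGE   LABELS
# nginx-deployment-xxxxxxxxxx-xxxxx   1/1     Running   0          18s   app=nginx,pod-template-hash=xxxxxxxxxx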

The created ReplicaSet ensures that there are three nginx Pods.




Ach.Chusnul Chikam

Cloud Consultant | RHCSA | CKA | AWS SAA | OpenStack Certified | OpenShift Certified | Google Cloud ACE | LinkedIn: https://www.linkedin.com/in/achchusnulchikam