
Setup Kubernetes Cluster on Ubuntu 18.04 using kubeadm

In this guide, I’ll take you through the steps to install and set up a working 3 node Kubernetes Cluster on Ubuntu 18.04 Bionic Beaver Linux.  Kubernetes is an open-source container-orchestration system used for automating deployment, management, and scaling of containerized applications.

In this article we cover in detail the process of:

  • Preparing Kubernetes nodes
  • Setting repositories
  • Installing container runtime engine
  • Installing Kubernetes components
  • Bootstrapping your cluster
  • Installing a networking plugin for Kubernetes
  • Joining new worker nodes into the cluster

Let’s get started!

Step 1: Systems Diagram & Hostnames

This setup is based on the following diagram:

Diagram: Kubernetes 3-node cluster

Let’s configure the system hostnames before proceeding to the next steps.

On Master Node:

Set the hostname as shown below:

sudo hostnamectl set-hostname k8s-master.example.com

On Worker Node 01:

Set the hostname using the hostnamectl command line tool.

sudo hostnamectl set-hostname k8s-node-01.example.com

On Worker Node 02:

Also set hostname for Kubernetes worker node 02.

sudo hostnamectl set-hostname k8s-node-02.example.com

Once the correct hostname has been configured on each host, populate /etc/hosts on each node with the configured values.

$ sudo vim /etc/hosts
192.168.2.2 k8s-master.example.com k8s-master
192.168.2.3 k8s-node-01.example.com  k8s-node-01
192.168.2.4 k8s-node-02.example.com  k8s-node-02

Log out and back in to start using the new hostname.

exit
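
Once you are logged back in, you can confirm the new hostname is active:

hostnamectl
hostname -f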

Step 2: Update System & Add user

Before doing any Kubernetes-specific configuration, let’s ensure all dependencies are satisfied. Here we will update the system and create a user for managing the Kubernetes cluster.

Update system packages to the latest release on all nodes:

sudo apt update && sudo apt upgrade -y
[ -e /var/run/reboot-required ] && sudo reboot

Add a user to manage the Kubernetes cluster:

sudo useradd -s /bin/bash -m k8s-admin
sudo passwd k8s-admin
sudo usermod -aG sudo k8s-admin
echo "k8s-admin ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/k8s-admin

If you prefer entering a sudo password when running commands as the k8s-admin user, you can skip the last line. Test that sudo works without a password prompt:

$ su - k8s-admin
k8s-admin@k8s-master:~$ sudo su -
root@k8s-master:~# exit

All looks good, let’s proceed to install the container runtime engine.

Step 3: Install Containerd Runtime Engine

Kubernetes requires a container runtime to run containers. In this article we’ll use containerd as the container runtime engine.

Ensure Docker is not installed; if it is, uninstall it:

sudo apt remove docker docker-engine docker.io

Install dependencies:

sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common

Import the Docker repository GPG key and add the repository:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Install containerd:

sudo apt update && sudo apt install containerd.io

The containerd service is started automatically after installation.

$ systemctl status containerd.service
● containerd.service - containerd container runtime
   Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2023-08-17 09:53:49 UTC; 13s ago
     Docs: https://containerd.io
  Process: 4265 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
 Main PID: 4270 (containerd)
    Tasks: 8
   CGroup: /system.slice/containerd.service
           └─4270 /usr/bin/containerd

Step 4: Install Kubernetes Components

Add official Kubernetes APT repository.

sudo tee /etc/apt/sources.list.d/kubernetes.list<<EOL
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOL

Then import GPG key:

curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
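
Note: the legacy apt.kubernetes.io repository has been deprecated and frozen. If the packages above cannot be found, you may need to switch to the community-owned pkgs.k8s.io repositories instead; the commands below are a sketch for the v1.28 minor release, adjust the version to your needs:

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list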

Update apt package index:

sudo apt update

Install the kubectl, kubelet, kubeadm, and kubernetes-cni packages on all nodes:

sudo apt install kubectl kubelet kubeadm kubernetes-cni

Confirm that all package binaries are present on the file system.

$ which kubelet
/usr/bin/kubelet

$ which kubeadm
/usr/bin/kubeadm
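
Optionally, you can hold the Kubernetes packages at their installed versions so a routine apt upgrade does not bump them unexpectedly; cluster upgrades are normally done deliberately with kubeadm:

sudo apt-mark hold kubelet kubeadm kubectl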

Kubernetes requires swap to be disabled. If swap is on, turn it off and comment out any swap entries in /etc/fstab:

sudo swapoff -a
sudo sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab
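
You can verify that swap is now disabled; the command below should print nothing when no swap device is active:

sudo swapon --show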

Load required modules.

sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF

Load the modules immediately, without rebooting:

sudo modprobe overlay
sudo modprobe br_netfilter

Also set sysctl configurations.

sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Load the configurations.

sudo sysctl --system
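
You can verify the parameters took effect by querying them directly; each should report a value of 1:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward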

Generate a default containerd configuration and restart the service:

sudo mkdir -p /etc/containerd
sudo containerd config default|sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
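
Note: recent kubeadm releases default the kubelet to the systemd cgroup driver, while the default containerd configuration uses cgroupfs. If you run into cgroup-related kubelet errors, a minimal tweak (assuming the default config layout generated above) is to enable SystemdCgroup and restart containerd:

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd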

Enable kubelet service

sudo systemctl enable kubelet

Step 5: Bootstrap Kubernetes Control Plane

All commands in this section are meant to be run on the master node only; don’t execute any of them on the Kubernetes worker nodes. The Kubernetes master components provide the cluster’s control plane – API Server, Scheduler and Controller Manager. They make global decisions about the cluster, e.g. scheduling, and detect and respond to cluster events.

Confirm containerd is working by pulling Kubernetes base images.

$ sudo kubeadm config images pull
[config/images] Pulled registry.k8s.io/kube-apiserver:v1.28.0
[config/images] Pulled registry.k8s.io/kube-controller-manager:v1.28.0
[config/images] Pulled registry.k8s.io/kube-scheduler:v1.28.0
[config/images] Pulled registry.k8s.io/kube-proxy:v1.28.0
[config/images] Pulled registry.k8s.io/pause:3.9
[config/images] Pulled registry.k8s.io/etcd:3.5.9-0
[config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1

If you have multiple CRI sockets, please use --cri-socket to select one:

# Containerd
sudo kubeadm config images pull --cri-socket unix:///run/containerd/containerd.sock

# For CRI-O
sudo kubeadm config images pull --cri-socket unix:///var/run/crio/crio.sock

Bootstrap without shared endpoint

To bootstrap a cluster without using DNS endpoint, run:

sudo sysctl -p
sudo kubeadm init \
  --pod-network-cidr=172.24.0.0/16 \
  --cri-socket unix:///run/containerd/containerd.sock

Bootstrap with shared endpoint (DNS name for control plane API)

Create a DNS record for the cluster endpoint, or add an entry to the /etc/hosts file:

$ sudo vim /etc/hosts
172.29.20.5 k8s-cluster.neveropen.co.za

Create cluster:

sudo sysctl -p
sudo kubeadm init \
  --pod-network-cidr=172.24.0.0/16 \
  --upload-certs \
  --control-plane-endpoint=k8s-cluster.neveropen.co.za

If all goes well, you should get a success message with instructions on what to do next:

---
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.2.2:6443 --token 9y4vc8.h7jdjle1xdovrd0z --discovery-token-ca-cert-hash sha256:cff9d1444a56b24b4a8839ff3330ab7177065c90753ef3e4e614566695db273c

Configure Access for k8s-admin user on the Master server

Switch to k8s-admin and copy the Kubernetes configuration file containing the cluster information:

sudo su - k8s-admin
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Validate deployment by checking cluster information.

$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.2.2:6443
CoreDNS is running at https://192.168.2.2:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Step 6: Deploy Pod Network to the Cluster

In this guide we’ll use Calico. You can choose any other supported network plugins.

Download the Calico operator and custom resource manifests. See the Calico releases page for the latest version.

VERSION=v3.26.1
curl -O https://raw.githubusercontent.com/projectcalico/calico/${VERSION}/manifests/tigera-operator.yaml
curl -O https://raw.githubusercontent.com/projectcalico/calico/${VERSION}/manifests/custom-resources.yaml 

First, install the operator on your cluster.

k8s-admin@k8s-master:~$ kubectl create -f tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created

Let’s also update the Calico IP pool to match the pod network CIDR passed to kubeadm init (172.24.0.0/16).

sed -i 's/192.168.0.0/172.24.0.0/g' custom-resources.yaml
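
You can confirm the CIDR was updated in the manifest before applying it:

grep cidr custom-resources.yaml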

Then apply the manifest configurations.

k8s-admin@k8s-master:~$  kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created

Wait for the Calico pods to reach the Running state.

k8s-admin@k8s-master:~$ kubectl get pods --all-namespaces
NAMESPACE          NAME                                             READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-5476c484c4-rmmts                1/1     Running   0          37s
calico-apiserver   calico-apiserver-5476c484c4-z9phf                1/1     Running   0          37s
calico-system      calico-kube-controllers-6945f87bd4-2xwx4         1/1     Running   0          80s
calico-system      calico-node-tkwtv                                1/1     Running   0          80s
calico-system      calico-typha-65f575c9fc-xx8m8                    1/1     Running   0          81s
calico-system      csi-node-driver-kngdn                            2/2     Running   0          80s
kube-system        coredns-5dd5756b68-77x77                         1/1     Running   0          11m
kube-system        coredns-5dd5756b68-plpd6                         1/1     Running   0          11m
kube-system        etcd-k8s-master.example.com                      1/1     Running   0          11m
kube-system        kube-apiserver-k8s-master.example.com            1/1     Running   0          11m
kube-system        kube-controller-manager-k8s-master.example.com   1/1     Running   0          11m
kube-system        kube-proxy-bxq4x                                 1/1     Running   0          11m
kube-system        kube-scheduler-k8s-master.example.com            1/1     Running   0          11m
tigera-operator    tigera-operator-94d7f7696-xls25                  1/1     Running   0          3m20s
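
If some pods are still in ContainerCreating state, you can keep watching until everything settles into Running:

watch kubectl get pods -n calico-system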

The node should now be Ready.

k8s-admin@k8s-master:~$ kubectl get nodes
NAME                     STATUS   ROLES           AGE   VERSION
k8s-master.example.com   Ready    control-plane   20m   v1.28.0

Step 7: Setup Kubernetes Worker Nodes

Once the Kubernetes cluster has been initialized and the master node is online, start configuring the Worker Nodes. A node is a worker machine in Kubernetes; it may be a VM or a physical machine. Each node is managed by the master and runs the services necessary to host pods – the container runtime, kubelet and kube-proxy.

1) Ensure the containerd runtime is installed (covered in Step 3)

Use the steps provided earlier in this article to install containerd.

2) Add the Kubernetes repository (covered in Step 4)

Ensure that the repository for Kubernetes packages is added to the system.

3) Install Kubernetes components (Step 4)

Once you’ve added the Kubernetes repository, install the components using:

sudo apt install kubelet kubeadm kubectl kubernetes-cni

4) Join the node to the cluster

Use the join command printed after initializing the Kubernetes cluster. If you no longer have it, you can regenerate it on the master with:

kubeadm token create --print-join-command
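
Run the printed command with sudo on each worker node. For example, using the token and CA certificate hash from the earlier kubeadm init output:

sudo kubeadm join 192.168.2.2:6443 --token 9y4vc8.h7jdjle1xdovrd0z \
  --discovery-token-ca-cert-hash sha256:cff9d1444a56b24b4a8839ff3330ab7177065c90753ef3e4e614566695db273c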

When done, check the node status from the master:

k8s-admin@k8s-master:~$ kubectl get nodes
NAME                      STATUS   ROLES           AGE   VERSION
k8s-master.example.com    Ready    control-plane   35m   v1.28.0
k8s-node-01.example.com   Ready    <none>          2m    v1.28.0
k8s-node-02.example.com   Ready    <none>          1m    v1.28.0

On the two worker nodes, Calico networking should now be configured as well. You can confirm from the master that a calico-node pod is running on each worker:

kubectl get pods -n calico-system -o wide

Step 8: Test Kubernetes Deployment

Create a test namespace:

$ kubectl create namespace test
namespace/test created

Deploy a test application in the test namespace:

kubectl create deployment hello-node -n test --image=registry.k8s.io/e2e-test-images/agnhost:2.39 -- /agnhost netexec --http-port=8080

View existing deployments:

$ kubectl get deployments -n test
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
hello-node   1/1     1            1           1m

See the running pods:

$ kubectl get pods -n test
NAME                          READY     STATUS    RESTARTS   AGE
hello-node-5f76cf6ccf-br9b5   1/1       Running   0          1m
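
Optionally, you can expose the deployment and confirm it responds over HTTP. This is a quick sketch; the node IP and the NodePort assigned to the service will differ in your environment, so the values in angle brackets are placeholders:

kubectl expose deployment hello-node -n test --type=NodePort --port=8080
kubectl get service hello-node -n test    # note the NodePort assigned to port 8080
curl http://<worker-node-ip>:<node-port>  # the agnhost netexec server should reply

Deleting the test namespace in the cleanup step below also removes this service.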

To clean up the test resources, run:

kubectl delete deployment hello-node -n test
kubectl delete namespace test

Step 9: Configure kubectl completion

Enable shell autocompletion for kubectl commands. kubectl includes autocompletion support, which can save a lot of typing! To enable shell completion in your current session, run:

source <(kubectl completion bash)

To add kubectl autocompletion to your profile, so it is automatically loaded in future shells run:

echo "source <(kubectl completion bash)" >> ~/.bashrc

If you are using zsh, edit the ~/.zshrc file and add the following code to enable kubectl autocompletion:

if [ $commands[kubectl] ]; then
  source <(kubectl completion zsh)
fi

Or, when using Oh-My-Zsh, edit the ~/.zshrc file and update the plugins= line to include the kubectl plugin:

plugins=(git kubectl)


Conclusion

We have successfully deployed a 3-node Kubernetes cluster on Ubuntu 18.04 LTS servers. Our next guides will cover Kubernetes HA, Kubernetes monitoring, how to configure external storage, and more cool stuff. Stay tuned!
