Install Kubernetes Cluster on Debian 12 (Bookworm) servers

Kubernetes has become one of the most widely adopted technologies since containerization gained popularity. Containerization is the packaging of an application together with all its required dependencies into a single lightweight executable unit known as a container. Kubernetes works by distributing the workload across a farm of servers, and it can automate the deployment, scaling, and management of containerized applications. Beyond that, it offers several other features and benefits:

  • Self-Healing: It continuously monitors the health of applications and automatically restarts or replaces failed containers. It can also detect and respond to node failures, rescheduling containers onto healthy nodes to maintain application availability.
  • Service Discovery and Load Balancing: It has built-in service discovery and load balancing mechanisms that assign a stable network IP and DNS name to a set of containers, making them easy to locate and communicate with internally. It also load-balances traffic across the containers within a service, distributing requests efficiently.
  • Automated Deployments and Rollbacks: It supports automated application deployments, ensuring consistent and reliable releases across environments. Moreover, you can define deployment strategies, perform rolling updates to minimize downtime, and roll back to previous versions whenever you encounter issues.
  • Container Orchestration: It makes it easy to manage containers, allowing you to define and manage complex application deployments, handle container lifecycle operations (such as scaling, rolling updates, and rollbacks), and manage container networking and storage.
  • Scalability: Kubernetes makes it easy to scale applications horizontally. It can automatically adjust the number of application instances based on demand, ensuring optimal resource utilization and handling increased traffic effectively.
  • High Availability: It has built-in features that ensure the high availability of applications. It automatically restarts failed containers, reschedules them onto healthy nodes, and distributes the application workload across multiple nodes, reducing the risk of downtime.

There are many other features and benefits of this tool. Today, our main focus is on how to install a Kubernetes cluster on Debian 12 (Bookworm) servers.

Environment Setup

For this setup, we will have 3 nodes configured as shown below:

TASK            HOSTNAME                  IP ADDRESS       SPECIFICATION
Control Node    master.neveropen.co.za    192.168.200.56   4GB RAM, 2 cores
Worker Node 1   worker1.neveropen.co.za   192.168.200.85   4GB RAM, 2 cores
Worker Node 2   worker2.neveropen.co.za   192.168.200.86   4GB RAM, 2 cores
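
If you do not run internal DNS for these names, you can map them in /etc/hosts on every node. A minimal sketch using the addresses from the table above:

sudo tee -a /etc/hosts <<EOF
192.168.200.56 master.neveropen.co.za
192.168.200.85 worker1.neveropen.co.za
192.168.200.86 worker2.neveropen.co.za
EOF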

The next thing is to ensure that all the systems are up to date:

sudo apt update
sudo apt -y full-upgrade
[ -f /var/run/reboot-required ] && sudo reboot -f

#1. Install the kubeadm Bootstrapping Tool

We will start by adding the Kubernetes APT repository to the system. The legacy apt.kubernetes.io repository has been deprecated and frozen, so we use the community-owned pkgs.k8s.io repository instead (the v1.27 stream is used here to match the version installed below; adjust as needed). First import the GPG key:

sudo apt install -y curl gnupg apt-transport-https
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.27/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Add the repository with the command:

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.27/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Once added, update the APT package index:

sudo apt update

Now install the required tools, i.e. kubelet, kubeadm, and kubectl:

sudo apt install wget curl vim git kubelet kubeadm kubectl -y
sudo apt-mark hold kubelet kubeadm kubectl

Verify the installation:

$ kubectl version --client && kubeadm version
Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.3", GitCommit:"25b4e43193bcda6c7328a6d147b1fb73a33f1598", GitTreeState:"clean", BuildDate:"2023-06-14T09:53:42Z", GoVersion:"go1.20.5", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
kubeadm version: &version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.3", GitCommit:"25b4e43193bcda6c7328a6d147b1fb73a33f1598", GitTreeState:"clean", BuildDate:"2023-06-14T09:52:26Z", GoVersion:"go1.20.5", Compiler:"gc", Platform:"linux/amd64"}

#2. Disable Swap and Enable Kernel Modules

When spinning up a Kubernetes cluster with kubeadm, it is recommended to disable swap for reasons of stability, performance, and resource management; by default, the kubelet refuses to start while swap is enabled.

Here, we will disable all active swap devices listed in /proc/swaps with the command:

sudo swapoff -a 

Verify if it has been disabled:

$ free -h
               total        used        free      shared  buff/cache   available
Mem:           3.8Gi       851Mi       2.2Gi       7.0Mi       1.0Gi       3.0Gi
Swap:             0B          0B          0B

The next thing is to disable it permanently by commenting out the swap entry in /etc/fstab:

$ sudo vim /etc/fstab
#UUID=3589935bj3333-39u4bb-24234343	none	swap	sw	0	0
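
Alternatively, you can comment out the swap entry non-interactively. A minimal sketch (run once; it prefixes any fstab line with a swap field with #):

sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab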

Once the swap line has been commented out, save the file. Next, configure the required kernel modules to load at boot:

sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF

Load the modules immediately with the commands:

sudo modprobe overlay
sudo modprobe br_netfilter

Next, create a sysctl configuration file with the required networking settings:

sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Reload the settings:

sudo sysctl --system
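
You can confirm that the settings took effect; each of the values below should be reported as 1:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward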

#3. Install and Configure Container Runtime

When setting up a cluster, you need a container runtime installed on all the nodes. There are several supported container runtimes:

  • Containerd
  • CRI-O
  • Docker

Remember, you must choose only one preferred container runtime from the options below and install it on all nodes.

Option 1: Using Containerd

It is possible to use Containerd as your runtime. First, install the required packages:

sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates

Add the Docker repo to the system:

curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/debian.gpg
sudo add-apt-repository "deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/debian $(lsb_release -cs) stable"

Now install containerd:

sudo apt update
sudo apt install -y containerd.io

Configure containerd:

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

Restart the service:

sudo systemctl restart containerd
sudo systemctl enable containerd

Verify if it is running:

$ systemctl status containerd
● containerd.service - containerd container runtime
     Loaded: loaded (/lib/systemd/system/containerd.service; enabled; preset: enabled)
     Active: active (running) since Fri 2023-06-23 08:33:37 EDT; 4s ago
       Docs: https://containerd.io
   Main PID: 2801 (containerd)
      Tasks: 8
     Memory: 14.3M
        CPU: 65ms
     CGroup: /system.slice/containerd.service
             └─2801 /usr/bin/containerd
....

In case you want to use the systemd cgroup driver, you need to modify /etc/containerd/config.toml and set SystemdCgroup = true under the [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] section (the older plugins.cri.systemd_cgroup = true key applies only to legacy version 1 configs).
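
With the stock config generated above, a minimal sketch to flip this setting is a simple in-place substitution (this assumes the default config.toml produced by containerd config default, which contains SystemdCgroup = false):

# Switch the runc options to the systemd cgroup driver
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd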

You can also consult the upstream Kubernetes documentation on configuring a cgroup driver for the kubelet.

Option 2: Using CRI-O

Add the CRI-O repo to your system. Note that OS=Debian_11 is used below; at the time of writing, the Kubic repository did not publish a dedicated Debian_12 directory for this CRI-O version, and the Debian 11 packages work on Bookworm.

OS=Debian_11
CRIO_VERSION=1.27
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /"|sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$OS/ /"|sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION.list
curl -fsSL https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION/$OS/Release.key | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/libcontainers-cri-o.gpg
curl -fsSL https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/libcontainers.gpg

The latest release can be obtained from the GitHub releases page. Once the repo has been added, update the APT package index:

sudo apt update

Install CRI-O and the required dependencies:

sudo apt install cri-o cri-o-runc

Start and enable the service:

sudo systemctl daemon-reload
sudo systemctl restart crio
sudo systemctl enable crio

Verify if the service is up:

$ systemctl status crio
● crio.service - Container Runtime Interface for OCI (CRI-O)
     Loaded: loaded (/lib/systemd/system/crio.service; enabled; preset: enabled)
     Active: active (running) since Fri 2023-06-23 09:18:32 EDT; 5s ago
       Docs: https://github.com/cri-o/cri-o
   Main PID: 10051 (crio)
      Tasks: 7
     Memory: 20.6M
        CPU: 148ms
     CGroup: /system.slice/crio.service
             └─10051 /usr/bin/crio

Option 3: Using Docker Runtime

It is also possible to use the Docker runtime when setting up a cluster with kubeadm. First, add the Docker repo:

sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/debian.gpg
sudo add-apt-repository "deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/debian $(lsb_release -cs) stable"

Install the Docker runtime:

sudo apt update
sudo apt install -y containerd.io docker-ce docker-ce-cli

Create the required directory:

sudo mkdir -p /etc/systemd/system/docker.service.d

Next, create the daemon.json file:

sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

Ensure the service is started and enabled:

sudo systemctl daemon-reload 
sudo systemctl restart docker
sudo systemctl enable docker

Now you will need a shim interface between Docker and the CRI. Here, you can install the Mirantis cri-dockerd, as sketched below.
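
A minimal installation sketch, assuming a hypothetical release version 0.3.4 and that a Debian Bookworm .deb is published for it; the exact asset name varies per release, so confirm it on the Mirantis cri-dockerd releases page:

# Hypothetical version; check https://github.com/Mirantis/cri-dockerd/releases
VER=0.3.4
wget https://github.com/Mirantis/cri-dockerd/releases/download/v${VER}/cri-dockerd_${VER}.3-0.debian-bookworm_amd64.deb
sudo apt install ./cri-dockerd_${VER}.3-0.debian-bookworm_amd64.deb
# Enable the shim's socket and service
sudo systemctl enable --now cri-docker.socket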

With cri-dockerd, the CRI socket will be at /run/cri-dockerd.sock. This is the socket that will be used when setting up our cluster.

#4. Initialize the Control Plane

Once the runtime has been installed on all the nodes, we will proceed to initialize the master node. Before that, ensure that the required kernel modules are loaded:

$ lsmod | grep br_netfilter
br_netfilter           32768  0
bridge                311296  1 br_netfilter

Also, ensure that the kubelet service is enabled so that it starts on boot:

sudo systemctl enable kubelet

The next thing is to pull all the required container images on the master node:

sudo kubeadm config images pull

If you have multiple runtimes installed, you can specify the desired one as shown:

# CRI-O
sudo kubeadm config images pull --cri-socket /var/run/crio/crio.sock

# Containerd
sudo kubeadm config images pull --cri-socket /run/containerd/containerd.sock

# Docker
sudo kubeadm config images pull --cri-socket /run/cri-dockerd.sock 

When initializing the master, there are several options you can use along with the kubeadm init command. These are:

  • --control-plane-endpoint: sets the shared endpoint for all control-plane nodes; can be a DNS name or an IP
  • --pod-network-cidr: sets the Pod network add-on CIDR
  • --cri-socket: sets the runtime socket path if you have more than one container runtime installed
  • --apiserver-advertise-address: sets the advertise address for this particular control-plane node's API server

Here, I will demonstrate several bootstrap options.

Option 1: Bootstrap without shared endpoint

In case you do not have a DNS server for your cluster, you can bootstrap the master with just the pod network setting:

sudo kubeadm init \
  --pod-network-cidr=192.168.89.0/16

Option 2: Bootstrap with shared endpoint/Load Balancer

This option can also be used when setting up a multi-master Kubernetes cluster. Here, you need a shared endpoint for the cluster. You can use a load balancer pointed at the control-plane nodes' IPs on port 6443. Once configured, update your /etc/hosts file with the DNS name. For example:

$ sudo vim /etc/hosts
192.168.200.88 k8sapi.neveropen.co.za

Now initialize the cluster:

sudo kubeadm init \
  --pod-network-cidr=192.168.89.0/16 \
  --upload-certs \
  --control-plane-endpoint=k8sapi.neveropen.co.za

Alternatively, you can also start the cluster without a load balancer, using only one control node:

sudo kubeadm init \
  --pod-network-cidr=192.168.89.0/16 \
  --upload-certs \
  --control-plane-endpoint=master.neveropen.co.za:6443

In the above command, you can also specify your --cri-socket if you have multiple runtimes running:

Runtime      Path to Unix domain socket
Docker       /run/cri-dockerd.sock
containerd   /run/containerd/containerd.sock
CRI-O        /var/run/crio/crio.sock

For example with CRI-O:

sudo kubeadm init \
  --pod-network-cidr=192.168.89.0/16 \
  --cri-socket /var/run/crio/crio.sock \
  --upload-certs \
  --control-plane-endpoint=k8sapi.neveropen.co.za

Once the cluster is initialized, kubeadm prints a success message along with the commands for configuring kubectl and joining additional nodes.


If UFW is in use, allow the API server port through the firewall:

sudo ufw allow 6443/tcp
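
Kubernetes uses several other well-known ports as well; a sketch of opening the documented control-plane and worker ports with UFW:

# Control plane node
sudo ufw allow 2379:2380/tcp   # etcd server client API
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 10257/tcp       # kube-controller-manager
sudo ufw allow 10259/tcp       # kube-scheduler

# Worker nodes
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 30000:32767/tcp # NodePort Services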

Now we will configure kubectl by copying the admin kubeconfig into our home directory:

mkdir -p $HOME/.kube
sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
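
Alternatively, if you are running as root, you can point kubectl at the admin config directly:

export KUBECONFIG=/etc/kubernetes/admin.conf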

Check the cluster information:

$ kubectl cluster-info
Kubernetes control plane is running at https://master.neveropen.co.za:6443
CoreDNS is running at https://master.neveropen.co.za:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

If you used the second option to bootstrap the cluster, you can add more control-plane nodes using the join command that kubeadm init printed.

For example:

sudo kubeadm join master.neveropen.co.za:6443 --token 1j03mh.u35ozlqia1wn843v \
	--discovery-token-ca-cert-hash sha256:7a2cc6eea0f189cb4ff7171f9bfb61c9618b0f292316f474f48db0db32b3cdfe \
	--control-plane --certificate-key ce87e2eb681b98348593adfdc182ec21b37248c8b635022003c1d841d7c4cb90
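
The bootstrap token and the certificate key are short-lived (24 hours and 2 hours by default). If they expire, you can generate fresh values on the master:

# Print a new worker join command with a fresh token
sudo kubeadm token create --print-join-command

# Re-upload the control-plane certificates and print a new certificate key
sudo kubeadm init phase upload-certs --upload-certs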

#5. Install and Configure the Network Plugin

The nodes will not reach the Ready state until you install a network plugin. Here, we will use the Flannel network plugin, although you can use any of the supported network plugins.

Download the manifest:

wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

If you are using a custom pod CIDR rather than the default 10.244.0.0/16, adjust the Network value to match the CIDR you passed to kubeadm init:

$ vim kube-flannel.yml
net-conf.json: |
    {
      "Network": "192.168.89.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

Apply the manifest:

$ kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Check if the pods are running:

$ kubectl get pods -n kube-flannel
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-h9b7t   1/1     Running   0          12s

Now verify if the master node is ready:

$ kubectl get nodes -o wide
NAME                           STATUS   ROLES           AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
master.neveropen.co.za   Ready    control-plane   5m23s   v1.27.3   192.168.200.56   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-9-amd64    cri-o://1.27.0

#6. Add Worker Nodes to the Cluster

Now you can proceed and add the worker nodes to your cluster. If you used the DNS option, add the record on all the worker nodes:

$ sudo vim /etc/hosts
192.168.200.88 k8sapi.neveropen.co.za

Now join the cluster using the provided command:

sudo kubeadm join master.neveropen.co.za:6443 --token 1j03mh.u35ozlqia1wn843v \
	--discovery-token-ca-cert-hash sha256:7a2cc6eea0f189cb4ff7171f9bfb61c9618b0f292316f474f48db0db32b3cdfe 

Sample Output:

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Now verify if the worker nodes have been added:

$ kubectl get nodes -o wide
NAME                            STATUS   ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
master.neveropen.co.za    Ready    control-plane   15m   v1.27.3   192.168.200.56   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-9-amd64    cri-o://1.27.0
worker1.neveropen.co.za   Ready    <none>          13s   v1.27.3   192.168.200.85   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-9-amd64    cri-o://1.27.0
worker2.neveropen.co.za   Ready    <none>          19s   v1.27.3   192.168.200.86   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-9-amd64    cri-o://1.27.0
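
Optionally, you can label the worker nodes so the ROLES column shows a role instead of <none>:

kubectl label node worker1.neveropen.co.za node-role.kubernetes.io/worker=
kubectl label node worker2.neveropen.co.za node-role.kubernetes.io/worker=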

#7. Using the Kubernetes Cluster

Now we can test if the cluster is working correctly by deploying a simple application:

kubectl apply -f https://k8s.io/examples/pods/commands.yaml

Verify that the application ran. This example pod simply executes a command and exits, so a Completed status is expected:

$ kubectl get pods
NAME           READY   STATUS      RESTARTS   AGE
command-demo   0/1     Completed   0          8s
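
Since the pod prints its output and exits, you can inspect its logs and then clean it up:

kubectl logs command-demo
kubectl delete pod command-demo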

Install Kubernetes Dashboard (Optional)

You can install a dashboard to make it easier to manage the cluster. We have a detailed guide to help you achieve this.
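
A minimal sketch applying the upstream recommended manifest (assuming Dashboard v2.7.0; check the project's releases for a version matching your cluster):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml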

Install Metrics Server (optional but recommended)

You can also install the Metrics Server, which collects resource metrics from the Summary API exposed by the kubelet on each node.
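
A minimal install sketch using the upstream components manifest; on lab clusters with self-signed kubelet certificates you may also need to add the --kubelet-insecure-tls flag to the metrics-server container args:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Once the metrics-server pod is running, resource metrics become available
kubectl top nodes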

Deploy Prometheus / Grafana Monitoring

To monitor the cluster using Prometheus and Grafana, you can follow the dedicated guides on this site.

Persistent Storage Configuration Ideas (Optional)

To persist data in the cluster, you can learn how to configure persistent storage on Kubernetes using the dedicated guides on this site.

Install Ingress Controller

You can configure an Ingress controller on your cluster by following the dedicated guides on this site.

Verdict

That marks the end of this guide. I hope you enjoyed it as much as I did. There are many other dedicated guides on this site; feel free to explore.
