
Install Kubernetes Cluster on Rocky Linux 8 with Kubeadm & CRI-O


In this guide, we shall be focusing on the installation of a Kubernetes Cluster on Rocky Linux 8 with Kubeadm and the CRI-O container runtime engine. There is no doubt that Kubernetes continues to transform how we deploy and manage business applications at scale. Regardless of the model used to deploy applications, whether manually or through CI/CD pipelines, Kubernetes remains the best choice for orchestrating, managing, and scaling your applications.

For those who do not know, Kubernetes has deprecated dockershim, which means it no longer supports Docker as the container runtime. It is for this reason that we'll be using the CRI-O container runtime. CRI-O is a lightweight alternative to using Docker as the runtime for Kubernetes. Before you throw tantrums, it is important to note that CRI-O works in a similar way to Docker: the way developers build and deploy application images is not affected.

What is CRI-O?

CRI-O is an implementation of the Kubernetes CRI (Container Runtime Interface) to enable using OCI (Open Container Initiative) compatible runtimes. CRI-O allows Kubernetes to use any OCI-compliant runtime as the container runtime for running pods.


CRI-O supports OCI container images and can pull from any container registry. It is a lightweight alternative to using Docker, Moby or rkt as the runtime for Kubernetes.
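
Once CRI-O is installed later in this guide, you can confirm that the runtime is reachable through its CRI socket using crictl. A minimal check, assuming the default CRI-O socket path:

# Query the runtime and its reported versions over the CRI socket
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version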

CRI-O architecture diagram from the official project website:

[Diagram: CRI-O container runtime architecture]

My Lab environment setup

I’ll deploy a Kubernetes cluster on a set of Rocky Linux 8 nodes as outlined below:

  • 3 Control plane nodes (Masters)
  • 4 Worker node machines (Data plane)
  • Ansible version required 2.10+

But it is okay to set up a Kubernetes cluster with a single master node using the process demonstrated in this guide.

Virtual Machines and Specs

We can see all the running nodes in my lab environment using the virsh command since it is powered by Vanilla KVM:

[root@kvm-private-lab ~]# virsh list
 Id   Name                   State
--------------------------------------
 4    k8s-bastion-server     running
 17   k8s-master-01-server   running
 22   k8s-master-02-server   running
 26   k8s-master-03-server   running
 31   k8s-worker-01-server   running
 36   k8s-worker-02-server   running
 38   k8s-worker-03-server   running
 41   k8s-worker-04-server   running

The first machine in the list will be our bastion / workstation machine. This is where we’ll perform the installation of Kubernetes Cluster on Rocky Linux 8 servers.

My list of servers with hostnames and IP addresses:

Hostname                    IP Address      Cluster Role
k8s-bastion.example.com     192.168.200.9   Bastion host
k8s-master-01.example.com   192.168.200.10  Master Node
k8s-master-02.example.com   192.168.200.11  Master Node
k8s-master-03.example.com   192.168.200.12  Master Node
k8s-worker-01.example.com   192.168.200.13  Worker Node
k8s-worker-02.example.com   192.168.200.14  Worker Node
k8s-worker-03.example.com   192.168.200.15  Worker Node
k8s-worker-04.example.com   192.168.200.16  Worker Node
Kubernetes Cluster Machines

Each machine in my cluster has the following hardware specifications:

# Memory
$ free -h
              total        used        free      shared  buff/cache   available
Mem:          7.5Gi       169Mi       7.0Gi       8.0Mi       285Mi       7.1Gi

# Disk space
$ df -hT /
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/vda4      xfs    39G  2.0G   37G   5% /

# CPU Cores
$ egrep ^processor /proc/cpuinfo  | wc -l
2

DNS Settings

Set A records for the hostnames in your DNS server, and optionally add them to the /etc/hosts file on each cluster node in case DNS resolution fails.

DNS bind records creation sample:

; Create entries for the master nodes
k8s-master-01		IN	A	192.168.200.10
k8s-master-02		IN	A	192.168.200.11
k8s-master-03		IN	A	192.168.200.12

; Create entries for the worker nodes
k8s-worker-01		IN	A	192.168.200.13
k8s-worker-02		IN	A	192.168.200.14
k8s-worker-03		IN	A	192.168.200.15

;
; The Kubernetes cluster ControlPlaneEndpoint - these point to the IPs of the masters
k8s-endpoint	IN	A	192.168.200.10
k8s-endpoint	IN	A	192.168.200.11
k8s-endpoint	IN	A	192.168.200.12
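
Once the zone is loaded, confirm the records resolve from the bastion host. A quick check, assuming the bind-utils package (which provides dig) is installed:

# Confirm forward resolution of the control plane endpoint and one master
dig +short k8s-endpoint.example.com
dig +short k8s-master-01.example.com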

Step 1: Prepare Bastion Server for Kubernetes installation

Install basic tools required for the setup on the Bastion / Workstation system:

### Ubuntu / Debian ###
sudo apt update
sudo apt install git wget curl vim bash-completion

### CentOS / RHEL / Fedora / Rocky Linux ###
sudo yum -y install git wget curl vim bash-completion

Install Ansible configuration management

With Python 3:

curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python3 get-pip.py --user

With Python 2:

curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py --user

Installing Ansible using pip

# Python 3
python3 -m pip install ansible --user

# Python 2
python -m pip install ansible --user

Check Ansible version after installation:

$ ansible --version
ansible [core 2.11.5]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.9.7 (default, Aug 30 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)]
  jinja version = 3.0.1
  libyaml = True

Update /etc/hosts file in your Bastion machine:

$ sudo vim /etc/hosts
192.168.200.9  k8s-bastion.example.com k8s-bastion

192.168.200.10 k8s-master-01.example.com k8s-master-01
192.168.200.11 k8s-master-02.example.com k8s-master-02
192.168.200.12 k8s-master-03.example.com k8s-master-03

192.168.200.13 k8s-worker-01.example.com k8s-worker-01
192.168.200.14 k8s-worker-02.example.com k8s-worker-02
192.168.200.15 k8s-worker-03.example.com k8s-worker-03
192.168.200.16 k8s-worker-04.example.com k8s-worker-04
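
Optionally confirm that every node is reachable by name from the bastion before proceeding. A simple loop, assuming ICMP is allowed on your network:

# Ping each cluster node once by its short hostname
for host in k8s-master-0{1..3} k8s-worker-0{1..4}; do
  ping -c 1 -W 2 $host &>/dev/null && echo "$host reachable" || echo "$host NOT reachable"
done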

Generate SSH keys:

$ ssh-keygen -t rsa -b 4096 -N ''
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:LwgX4oCWENqWAyW9oywAv9jTK+BEk4+XShgX0galBqE root@k8s-master-01.example.com
The key's randomart image is:
+---[RSA 4096]----+
|OOo              |
|B**.             |
|EBBo. .          |
|===+ . .         |
|=*+++ . S        |
|*=++.o . .       |
|=.o. .. . .      |
| o. .    .       |
|   .             |
+----[SHA256]-----+

Create SSH client configuration file

$ vim ~/.ssh/config
Host *
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no
    IdentitiesOnly yes
    ConnectTimeout 0
    ServerAliveInterval 30

Copy SSH keys to all Kubernetes cluster nodes

# Master Nodes
for host in k8s-master-0{1..3}; do
  ssh-copy-id root@$host
done

# Worker Nodes
for host in k8s-worker-0{1..4}; do
  ssh-copy-id root@$host
done
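
Optionally verify that passwordless SSH now works to every node. A quick loop that prints each node's hostname without prompting for a password:

# BatchMode makes ssh fail instead of prompting if key authentication is not working
for host in k8s-master-0{1..3} k8s-worker-0{1..4}; do
  ssh -o BatchMode=yes root@$host hostname
done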

Step 2: Set correct hostname on all nodes

Login to each node in the cluster and configure correct hostname:

# Examples
# Master Node 01
sudo hostnamectl set-hostname k8s-master-01.example.com

# Worker Node 01
sudo hostnamectl set-hostname k8s-worker-01.example.com

Logout then back in to confirm the hostname is set correctly:

$ hostnamectl
   Static hostname: k8s-master-01.example.com
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 7a7841970fc6fab913a02ca8ae57fe18
           Boot ID: 4388978e190a4be69eb640b31e12a63e
    Virtualization: kvm
  Operating System: Rocky Linux 8.4 (Green Obsidian)
       CPE OS Name: cpe:/o:rocky:rocky:8.4:GA
            Kernel: Linux 4.18.0-305.19.1.el8_4.x86_64
      Architecture: x86-64

For a cloud instance using cloud-init check out the guide below:
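
As an illustration only, a minimal cloud-init user-data snippet that sets the hostname on first boot could look like the one below; the exact keys honoured depend on your cloud image:

#cloud-config
preserve_hostname: false
hostname: k8s-master-01
fqdn: k8s-master-01.example.com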

Step 3: Prepare Rocky Linux 8 servers for Kubernetes (Pre-reqs setup)

I wrote an Ansible role that performs the standard Kubernetes node preparation. The role contains tasks to:

  • Install standard packages required to manage the nodes
  • Set up standard system requirements – disable swap, modify sysctl, disable SELinux
  • Install and configure a container runtime of your choice – CRI-O, Docker, or containerd
  • Install the Kubernetes packages – kubelet, kubeadm and kubectl
  • Configure Firewalld on the Kubernetes master and worker nodes – open all required ports

Clone my Ansible role, which we’ll use to set up the Kubernetes requirements before kubeadm init, to your Bastion machine:

git clone https://github.com/jmutai/k8s-pre-bootstrap.git

Switch to the k8s-pre-bootstrap directory created by the clone process:

cd k8s-pre-bootstrap

Set the hosts inventory correctly with your Kubernetes nodes. Here is my inventory list:

$ vim hosts
[k8snodes]
k8s-master-01
k8s-master-02
k8s-master-03
k8s-worker-01
k8s-worker-02
k8s-worker-03
k8s-worker-04

You should also update the variables in the playbook file. The most important ones are:

  • Kubernetes version: k8s_version
  • Your timezone: timezone
  • Kubernetes CNI to use: k8s_cni
  • Container runtime: container_runtime
$ vim  k8s-prep.yml
---
- name: Setup Proxy
  hosts: k8snodes
  remote_user: root
  become: yes
  become_method: sudo
  #gather_facts: no
  vars:
    k8s_version: "1.21"                                  # Kubernetes version to be installed
    selinux_state: permissive                            # SELinux state to be set on k8s nodes
    timezone: "Africa/Nairobi"                           # Timezone to set on all nodes
    k8s_cni: calico                                      # calico, flannel
    container_runtime: cri-o                             # docker, cri-o, containerd
    configure_firewalld: true                            # true / false
    # Docker proxy support
    setup_proxy: false                                   # Set to true to configure proxy
    proxy_server: "proxy.example.com:8080"               # Proxy server address and port
    docker_proxy_exclude: "localhost,127.0.0.1"          # Addresses to exclude from proxy
  roles:
    - kubernetes-bootstrap

Check Playbook syntax:

$ ansible-playbook  --syntax-check -i hosts k8s-prep.yml

playbook: k8s-prep.yml

If your SSH private key has a passphrase, add it to the SSH agent to prevent prompts while executing the playbook:

eval `ssh-agent -s` && ssh-add

Run the playbook to prepare your nodes:

$ ansible-playbook -i hosts k8s-prep.yml

If the servers are accessible, execution should start immediately:

PLAY [Setup Proxy] ***********************************************************************************************************************************************************************************

TASK [Gathering Facts] *******************************************************************************************************************************************************************************
ok: [k8s-worker-02]
ok: [k8s-worker-01]
ok: [k8s-master-03]
ok: [k8s-master-01]
ok: [k8s-worker-03]
ok: [k8s-master-02]
ok: [k8s-worker-04]

TASK [kubernetes-bootstrap : Add the OS specific variables] ******************************************************************************************************************************************
ok: [k8s-master-01] => (item=/root/k8s-pre-bootstrap/roles/kubernetes-bootstrap/vars/RedHat8.yml)
ok: [k8s-master-02] => (item=/root/k8s-pre-bootstrap/roles/kubernetes-bootstrap/vars/RedHat8.yml)
ok: [k8s-master-03] => (item=/root/k8s-pre-bootstrap/roles/kubernetes-bootstrap/vars/RedHat8.yml)
ok: [k8s-worker-01] => (item=/root/k8s-pre-bootstrap/roles/kubernetes-bootstrap/vars/RedHat8.yml)
ok: [k8s-worker-02] => (item=/root/k8s-pre-bootstrap/roles/kubernetes-bootstrap/vars/RedHat8.yml)
ok: [k8s-worker-03] => (item=/root/k8s-pre-bootstrap/roles/kubernetes-bootstrap/vars/RedHat8.yml)
ok: [k8s-worker-04] => (item=/root/k8s-pre-bootstrap/roles/kubernetes-bootstrap/vars/RedHat8.yml)

TASK [kubernetes-bootstrap : Put SELinux in permissive mode] *****************************************************************************************************************************************
changed: [k8s-master-01]
changed: [k8s-worker-01]
changed: [k8s-master-03]
changed: [k8s-master-02]
changed: [k8s-worker-02]
changed: [k8s-worker-03]
changed: [k8s-worker-04]

TASK [kubernetes-bootstrap : Update system packages] *************************************************************************************************************************************************

Here is a screenshot with playbook execution initiated:

[Screenshot: playbook execution initiated]

After the setup confirm there are no errors in the output:

...output omitted...
PLAY RECAP *******************************************************************************************************************************************************************************************
k8s-master-01              : ok=28   changed=20   unreachable=0    failed=0    skipped=12   rescued=0    ignored=0
k8s-master-02              : ok=28   changed=20   unreachable=0    failed=0    skipped=12   rescued=0    ignored=0
k8s-master-03              : ok=28   changed=20   unreachable=0    failed=0    skipped=12   rescued=0    ignored=0
k8s-worker-01              : ok=27   changed=19   unreachable=0    failed=0    skipped=13   rescued=0    ignored=0
k8s-worker-02              : ok=27   changed=19   unreachable=0    failed=0    skipped=13   rescued=0    ignored=0
k8s-worker-03              : ok=27   changed=19   unreachable=0    failed=0    skipped=13   rescued=0    ignored=0
k8s-worker-04              : ok=27   changed=19   unreachable=0    failed=0    skipped=13   rescued=0    ignored=0

If you encounter any errors while running the playbook reach out using the comments section and we’ll be happy to help.

[Screenshot: playbook run summary]

Login to one of the nodes and validate the settings below:

  • Configured /etc/hosts file contents:
[root@k8s-master-01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.200.10 k8s-master-01.example.com k8s-master-01
192.168.200.11 k8s-master-02.example.com k8s-master-02
192.168.200.12 k8s-master-03.example.com k8s-master-03
192.168.200.13 k8s-worker-01.example.com k8s-worker-01
192.168.200.14 k8s-worker-02.example.com k8s-worker-02
192.168.200.15 k8s-worker-03.example.com k8s-worker-03
192.168.200.16 k8s-worker-04.example.com k8s-worker-04
  • Status of cri-o service:
[root@k8s-master-01 ~]# systemctl status crio
 crio.service - Container Runtime Interface for OCI (CRI-O)
   Loaded: loaded (/usr/lib/systemd/system/crio.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-09-24 18:06:53 EAT; 11min ago
     Docs: https://github.com/cri-o/cri-o
 Main PID: 13445 (crio)
    Tasks: 10
   Memory: 41.9M
   CGroup: /system.slice/crio.service
           └─13445 /usr/bin/crio

Sep 24 18:06:53 k8s-master-01.example.com crio[13445]: time="2021-09-24 18:06:53.052576977+03:00" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_>
Sep 24 18:06:53 k8s-master-01.example.com crio[13445]: time="2021-09-24 18:06:53.111352936+03:00" level=info msg="Conmon does support the --sync option"
Sep 24 18:06:53 k8s-master-01.example.com crio[13445]: time="2021-09-24 18:06:53.111623836+03:00" level=info msg="No seccomp profile specified, using the internal default"
Sep 24 18:06:53 k8s-master-01.example.com crio[13445]: time="2021-09-24 18:06:53.111638473+03:00" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
Sep 24 18:06:53 k8s-master-01.example.com crio[13445]: time="2021-09-24 18:06:53.117006450+03:00" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.co>
Sep 24 18:06:53 k8s-master-01.example.com crio[13445]: time="2021-09-24 18:06:53.120722070+03:00" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200>
Sep 24 18:06:53 k8s-master-01.example.com crio[13445]: time="2021-09-24 18:06:53.120752984+03:00" level=info msg="Updated default CNI network name to crio"
Sep 24 18:06:53 k8s-master-01.example.com crio[13445]: W0924 18:06:53.126936   13445 hostport_manager.go:71] The binary conntrack is not installed, this can cause failures in network conn>
Sep 24 18:06:53 k8s-master-01.example.com crio[13445]: W0924 18:06:53.130986   13445 hostport_manager.go:71] The binary conntrack is not installed, this can cause failures in network conn>
Sep 24 18:06:53 k8s-master-01.example.com systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
  • Configured sysctl kernel parameters
[root@k8s-master-01 ~]# sysctl -p
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
  • Firewalld opened ports:
[root@k8s-master-01 ~]# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp1s0
  sources:
  services: cockpit dhcpv6-client ssh
  ports: 22/tcp 80/tcp 443/tcp 6443/tcp 2379-2380/tcp 10250/tcp 10251/tcp 10252/tcp 30000-32767/tcp 4789/udp 5473/tcp 179/tcp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
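
You can also confirm that swap is off and the required kernel modules are loaded, since the kubeadm pre-flight checks will flag both. A quick manual check:

# Swap line should report 0B used and 0B total
free -h | grep -i swap

# br_netfilter and overlay modules should be listed
lsmod | grep -E 'br_netfilter|overlay'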

Step 4: Bootstrap Kubernetes Control Plane (Single / Multi-node)

We’ll use the kubeadm init command to initialize a Kubernetes control-plane node. It starts by running a series of pre-flight checks to validate the system state before making changes.

Below are the key options you should be aware of:

  • --apiserver-advertise-address: The IP address the API Server will advertise it is listening on. If not set, the default network interface will be used.
  • --apiserver-bind-port: Port for the API Server to bind to; default is 6443.
  • --control-plane-endpoint: Specify a stable IP address or DNS name for the control plane.
  • --cri-socket: Path to the CRI socket to connect to. If empty, kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have a non-standard CRI socket.
  • --dry-run: Don't apply any changes; just output what would be done.
  • --image-repository: Choose a container registry to pull control plane images from; default: "k8s.gcr.io".
  • --kubernetes-version: Choose a specific Kubernetes version for the control plane.
  • --pod-network-cidr: Specify the range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.
  • --service-cidr: Use an alternative range of IP addresses for service VIPs. Default: "10.96.0.0/12".

The following table lists container runtimes and their associated socket paths:

Runtime      Path to Unix domain socket
Docker       /var/run/dockershim.sock
containerd   /run/containerd/containerd.sock
CRI-O        /var/run/crio/crio.sock

Checking Kubernetes Version release notes

Release notes can be found by reading the Changelog that matches your Kubernetes version
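
You can also check which kubeadm versions are available in the repositories configured on the nodes and which one is installed. Assuming the Kubernetes yum repository was set up by the Ansible role:

# List kubeadm versions available in the enabled repositories
sudo dnf --showduplicates list kubeadm

# Show the installed kubeadm version
kubeadm version -o short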

Option 1: Bootstrapping single node Control Plane Kubernetes Cluster

If you have plans to upgrade a single control-plane kubeadm cluster to high availability you should specify the --control-plane-endpoint to set the shared endpoint for all control-plane nodes.

But if this is meant for a test environment with a single-node control plane, then you can ignore the --control-plane-endpoint option.

Login to the master node:

[root@k8s-bastion ~]# ssh k8s-master-01
Warning: Permanently added 'k8s-master-01' (ED25519) to the list of known hosts.
Last login: Fri Sep 24 18:07:55 2021 from 192.168.200.9
[root@k8s-master-01 ~]#

Then initialize the Control Plane:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16
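
When the command completes, configure kubectl for your user as instructed in the kubeadm output before deploying the network add-on:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config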

Deploy Calico Pod network to the cluster

Use the commands below to deploy the Calico Pod network add-on:

kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml 
kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml

Check pods status with the following command:

[root@k8s-master-01 ~]# kubectl get pods -n calico-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-bdd5f97c5-6856t   1/1     Running   0          60s
calico-node-vnlkf                         1/1     Running   0          60s
calico-typha-5f857549c5-hkbwq             1/1     Running   0          60s

Option 2: Bootstrapping Multi-node Control Plane Kubernetes Cluster

The --control-plane-endpoint option can be used to set the shared endpoint for all control-plane nodes. This option accepts both IP addresses and DNS names that map to IP addresses.

Example of A records in a BIND DNS server:

; Create entries for the master nodes
k8s-master-01		IN	A	192.168.200.10
k8s-master-02		IN	A	192.168.200.11
k8s-master-03		IN	A	192.168.200.12

;
; The Kubernetes cluster ControlPlaneEndpoint - these point to the IPs of the masters
k8s-endpoint	IN	A	192.168.200.10
k8s-endpoint	IN	A	192.168.200.11
k8s-endpoint	IN	A	192.168.200.12

Example of equivalent entries in the /etc/hosts file:

$ sudo vim /etc/hosts

192.168.200.10 k8s-master-01.example.com k8s-master-01
192.168.200.11 k8s-master-02.example.com k8s-master-02
192.168.200.12 k8s-master-03.example.com k8s-master-03

##  Kubernetes cluster ControlPlaneEndpoint Entries ###
192.168.200.10 k8s-endpoint.example.com  k8s-endpoint
#192.168.200.11 k8s-endpoint.example.com  k8s-endpoint
#192.168.200.12 k8s-endpoint.example.com  k8s-endpoint

Using Load Balancer IP for ControlPlaneEndpoint

The ideal approach for HA setups is mapping the ControlPlane Endpoint to a load balancer IP. The load balancer then points to the control plane nodes with some form of health checks (a sample HAProxy configuration is sketched below).

# Entry in Bind DNS Server
k8s-endpoint	IN	A	192.168.200.8

# Entry in /etc/hosts file
192.168.200.8 k8s-endpoint.example.com  k8s-endpoint
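
As an illustration, a minimal HAProxy configuration fronting the three kube-apiservers could look like the sketch below. The load balancer address 192.168.200.8 is an assumption taken from the DNS entry above; adjust ports, timeouts and health checks for production use:

# /etc/haproxy/haproxy.cfg (sketch)
frontend k8s-api
    bind 192.168.200.8:6443
    mode tcp
    option tcplog
    default_backend k8s-api-backend

backend k8s-api-backend
    mode tcp
    balance roundrobin
    option tcp-check
    server k8s-master-01 192.168.200.10:6443 check fall 3 rise 2
    server k8s-master-02 192.168.200.11:6443 check fall 3 rise 2
    server k8s-master-03 192.168.200.12:6443 check fall 3 rise 2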

Bootstrap Multi-node Control Plane Kubernetes Cluster

Login to Master Node 01 from the bastion server or your workstation machine:

[root@k8s-bastion ~]# ssh k8s-master-01
Warning: Permanently added 'k8s-master-01' (ED25519) to the list of known hosts.
Last login: Fri Sep 24 18:07:55 2021 from 192.168.200.9
[root@k8s-master-01 ~]#

Update the /etc/hosts file with this node's IP address and a custom DNS name that maps to this IP:

[root@k8s-master-01 ~]# vim /etc/hosts
192.168.200.10 k8s-endpoint.example.com  k8s-endpoint

To initialize the control-plane node run:

[root@k8s-master-01 ~]# kubeadm init \
  --pod-network-cidr=192.168.0.0/16 \
  --control-plane-endpoint=k8s-endpoint.example.com \
  --cri-socket=/var/run/crio/crio.sock \
  --upload-certs

Where:

  • k8s-endpoint.example.com is a valid DNS name configured for the ControlPlane Endpoint
  • /var/run/crio/crio.sock is the CRI-O runtime socket file
  • 192.168.0.0/16 is the Pod network to be used in Kubernetes
  • --upload-certs uploads the certificates that should be shared across all control-plane instances to the cluster

If successful you’ll get an output with contents similar to this:

...output omitted...
[mark-control-plane] Marking the node k8s-master-01.example.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: p11op9.eq9vr8gq9te195b9
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8s-endpoint.example.com:6443 --token 78oyk4.ds1hpo2vnwg3yykt \
	--discovery-token-ca-cert-hash sha256:4fbb0d45a1989cf63624736a005dc00ce6068eb7543ca4ae720c7b99a0e86aca \
	--control-plane --certificate-key 999110f4a07d3c430d19ca0019242f392e160216f3b91f421da1a91f1a863bba

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-endpoint.example.com:6443 --token 78oyk4.ds1hpo2vnwg3yykt \
	--discovery-token-ca-cert-hash sha256:4fbb0d45a1989cf63624736a005dc00ce6068eb7543ca4ae720c7b99a0e86aca

Configure Kubectl as shown in the output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Test by checking active Nodes:

[root@k8s-master-01 ~]# kubectl get nodes
NAME                                  STATUS   ROLES                  AGE    VERSION
k8s-master-01.example.com             Ready    control-plane,master   4m3s   v1.26.2

Deploy Calico Pod network to the cluster

Use the commands below to deploy the Calico Pod network add-on:

kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml 
kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml

Check pods status with the following command:

[root@k8s-master-01 ~]# kubectl get pods -n calico-system -w
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-bdd5f97c5-6856t   1/1     Running   0          60s
calico-node-vnlkf                         1/1     Running   0          60s
calico-typha-5f857549c5-hkbwq             1/1     Running   0          60s

Add other control plane nodes

Add Master Node 02

Login to k8s-master-02:

[root@k8s-bastion ~]# ssh k8s-master-02
Warning: Permanently added 'k8s-master-02' (ED25519) to the list of known hosts.
Last login: Sat Sep 25 01:49:15 2021 from 192.168.200.9
[root@k8s-master-02 ~]#

Update the /etc/hosts file by setting the ControlPlaneEndpoint to the first control plane node, from which the bootstrap process was initiated:

[root@k8s-master-02 ~]# vim /etc/hosts
192.168.200.10 k8s-endpoint.example.com   k8s-endpoint
#192.168.200.11 k8s-endpoint.example.com  k8s-endpoint
#192.168.200.12 k8s-endpoint.example.com  k8s-endpoint

I’ll use the command printed after a successful initialization:

kubeadm join k8s-endpoint.example.com:6443 --token 78oyk4.ds1hpo2vnwg3yykt \
  --discovery-token-ca-cert-hash sha256:4fbb0d45a1989cf63624736a005dc00ce6068eb7543ca4ae720c7b99a0e86aca \
  --control-plane --certificate-key 999110f4a07d3c430d19ca0019242f392e160216f3b91f421da1a91f1a863bba

Add Master Node 03

Login to k8s-master-03:

[root@k8s-bastion ~]# ssh k8s-master-03
Warning: Permanently added 'k8s-master-03' (ED25519) to the list of known hosts.
Last login: Sat Sep 25 01:55:11 2021 from 192.168.200.9
[root@k8s-master-03 ~]#

Update the /etc/hosts file by setting the ControlPlaneEndpoint to the first control plane node, from which the bootstrap process was initiated:

[root@k8s-master-03 ~]# vim /etc/hosts
192.168.200.10 k8s-endpoint.example.com  k8s-endpoint
#192.168.200.11 k8s-endpoint.example.com  k8s-endpoint
#192.168.200.12 k8s-endpoint.example.com  k8s-endpoint

I’ll use the command printed after a successful initialization:

kubeadm join k8s-endpoint.example.com:6443 --token 78oyk4.ds1hpo2vnwg3yykt \
  --discovery-token-ca-cert-hash sha256:4fbb0d45a1989cf63624736a005dc00ce6068eb7543ca4ae720c7b99a0e86aca \
  --control-plane --certificate-key 999110f4a07d3c430d19ca0019242f392e160216f3b91f421da1a91f1a863bba

Check Control Plane Nodes List

From one of the master nodes with Kubectl configured check the list of nodes:

[root@k8s-master-03 ~]# kubectl get nodes
NAME                                  STATUS   ROLES                  AGE   VERSION
k8s-master-01.example.com             Ready    control-plane,master   11m   v1.26.2
k8s-master-02.example.com             Ready    control-plane,master   5m    v1.26.2
k8s-master-03.example.com             Ready    control-plane,master   32s   v1.26.2

You can now uncomment the other lines in the /etc/hosts file on each control plane node if you're not using a load balancer IP:

# Perform on all control plane nodes
[root@k8s-master-03 ~]# vim /etc/hosts
###  Kubernetes cluster ControlPlaneEndpoint Entries ###
192.168.200.10 k8s-endpoint.example.com  k8s-endpoint
192.168.200.11 k8s-endpoint.example.com  k8s-endpoint
192.168.200.12 k8s-endpoint.example.com  k8s-endpoint

Step 5: Adding Worker Nodes to Kubernetes Cluster

Login to each of the worker machines using ssh:

### Example ###
[root@k8s-bastion ~]# ssh root@k8s-worker-01
Warning: Permanently added 'k8s-worker-01' (ED25519) to the list of known hosts.
Enter passphrase for key '/root/.ssh/id_rsa':
Last login: Sat Sep 25 04:27:42 2021 from 192.168.200.9
[root@k8s-worker-01 ~]#

Update the /etc/hosts file on each node with the master and worker node hostnames/IP addresses if no DNS is in place:

[root@k8s-worker-01 ~]# sudo vim /etc/hosts
### Also add Kubernetes cluster ControlPlaneEndpoint Entries for multiple control plane nodes(masters) ###
192.168.200.10 k8s-endpoint.example.com  k8s-endpoint
192.168.200.11 k8s-endpoint.example.com  k8s-endpoint
192.168.200.12 k8s-endpoint.example.com  k8s-endpoint

Join your worker machines to the cluster using the commands given earlier:

kubeadm join k8s-endpoint.example.com:6443 \
  --token 78oyk4.ds1hpo2vnwg3yykt \
  --discovery-token-ca-cert-hash sha256:4fbb0d45a1989cf63624736a005dc00ce6068eb7543ca4ae720c7b99a0e86aca

Once done, run kubectl get nodes on the control-plane to see the nodes join the cluster:

[root@k8s-master-02 ~]# kubectl get nodes
NAME                                  STATUS   ROLES                  AGE     VERSION
k8s-master-01.example.com             Ready    control-plane,master   23m     v1.26.2
k8s-master-02.example.com             Ready    control-plane,master   16m     v1.26.2
k8s-master-03.example.com             Ready    control-plane,master   12m     v1.26.2
k8s-worker-01.example.com             Ready    <none>                 3m25s   v1.26.2
k8s-worker-02.example.com             Ready    <none>                 2m53s   v1.26.2
k8s-worker-03.example.com             Ready    <none>                 2m31s   v1.26.2
k8s-worker-04.example.com             Ready    <none>                 2m12s   v1.26.2

Step 6: Deploy test application on cluster

We need to validate that our cluster is working by deploying an application. We'll work with the Guestbook application.

For single node cluster check out our guide on how to run pods on control plane nodes:
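
For reference, a single-node or lab cluster can allow regular pods on control plane nodes by removing the control-plane taint. The taint key differs between Kubernetes releases, so the sketch below tries both and ignores whichever is absent:

# Allow workloads on control plane nodes (lab / single-node clusters only)
kubectl taint nodes --all node-role.kubernetes.io/master- || true
kubectl taint nodes --all node-role.kubernetes.io/control-plane- || true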

Create a temporary namespace:

$ kubectl create namespace temp
namespace/temp created

Deploy the guestbook application in the temp namespace created:

kubectl -n temp apply -f https://k8s.io/examples/application/guestbook/redis-leader-deployment.yaml
kubectl -n temp apply -f https://k8s.io/examples/application/guestbook/redis-leader-service.yaml
kubectl -n temp apply -f https://k8s.io/examples/application/guestbook/redis-follower-deployment.yaml
kubectl -n temp apply -f https://k8s.io/examples/application/guestbook/redis-follower-service.yaml
kubectl -n temp apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml
kubectl -n temp apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml

Query the list of Pods to verify that they are running after a few minutes:

$ kubectl get all -n temp
NAME                                 READY   STATUS    RESTARTS   AGE
pod/frontend-85595f5bf9-j9xlp        1/1     Running   0          81s
pod/frontend-85595f5bf9-m6lsl        1/1     Running   0          81s
pod/frontend-85595f5bf9-tht82        1/1     Running   0          81s
pod/redis-follower-dddfbdcc9-hjjf6   1/1     Running   0          83s
pod/redis-follower-dddfbdcc9-vg4sf   1/1     Running   0          83s
pod/redis-leader-fb76b4755-82xlp     1/1     Running   0          7m34s

NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/frontend         ClusterIP   10.101.239.74   <none>        80/TCP     7s
service/redis-follower   ClusterIP   10.109.129.97   <none>        6379/TCP   83s
service/redis-leader     ClusterIP   10.101.73.117   <none>        6379/TCP   84s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/frontend         3/3     3            3           81s
deployment.apps/redis-follower   2/2     2            2           83s
deployment.apps/redis-leader     1/1     1            1           7m34s

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/frontend-85595f5bf9        3         3         3       81s
replicaset.apps/redis-follower-dddfbdcc9   2         2         2       83s
replicaset.apps/redis-leader-fb76b4755     1         1         1       7m34s

Run the following command to forward port 8080 on your local machine to port 80 on the service:

$ kubectl -n temp port-forward svc/frontend 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

Now load the page http://localhost:8080 in your browser to view your guestbook.

[Screenshot: Guestbook application in the browser]

Step 7: Install Metrics Server ( For checking Pods and Nodes resource usage)

Metrics Server is a cluster-wide aggregator of resource usage data. It collects metrics from the Summary API, exposed by Kubelet on each node. Use our guide below to deploy it:
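
Once the Metrics Server is running, resource usage can be queried directly with kubectl:

# Node CPU and memory usage
kubectl top nodes

# Pod usage across all namespaces
kubectl top pods -A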

Step 8: Install Ingress Controller

You can also install an Ingress controller for Kubernetes workloads; use one of our guides for the installation process:

Step 9: Deploy Prometheus / Grafana Monitoring

Prometheus is a full-fledged solution that enables you to access advanced metrics capabilities in a Kubernetes cluster. Grafana is used for analytics and interactive visualization of the metrics collected and stored in the Prometheus database. We have a complete guide on how to set up a complete monitoring stack on a Kubernetes cluster:

Step 10: Deploy Kubernetes Dashboard (Optional)

Kubernetes dashboard can be used to deploy containerized applications to a Kubernetes cluster, troubleshoot your containerized application, and manage the cluster resources.

Refer to our guide for installation:

Step 11: Persistent Storage Configuration ideas (Optional)

If you’re also looking for a persistent storage solution for your Kubernetes cluster, check out:

Step 12: Deploy MetalLB on Kubernetes

Follow the guide below to install and configure MetalLB on Kubernetes:

Step 13: CRI-O Basic management using crictl – Bonus

Display information of the container runtime:

[root@k8s-master-01 ~]# sudo crictl info
{
  "status": {
    "conditions": [
      {
        "type": "RuntimeReady",
        "status": true,
        "reason": "",
        "message": ""
      },
      {
        "type": "NetworkReady",
        "status": true,
        "reason": "",
        "message": ""
      }
    ]
  }
}

Listing available images pulled on each node:

sudo crictl  images

Listing running Pods in a node:

[root@k8s-master-01 ~]# sudo crictl pods
POD ID              CREATED             STATE               NAME                                                          NAMESPACE           ATTEMPT             RUNTIME
4f9630e87f62f       45 hours ago        Ready               calico-apiserver-77dffffcdf-fvkp6                             calico-apiserver    0                   (default)
cbb3e8f3e027f       45 hours ago        Ready               calico-kube-controllers-bdd5f97c5-thmhs                       calico-system       0                   (default)
e54575c66d1f4       45 hours ago        Ready               coredns-78fcd69978-wnmdw                                      kube-system         0                   (default)
c4d03ba28658e       45 hours ago        Ready               coredns-78fcd69978-w25zj                                      kube-system         0                   (default)
350967fe5a9ae       45 hours ago        Ready               calico-node-24bff                                             calico-system       0                   (default)
a05fe07cac170       45 hours ago        Ready               calico-typha-849b9f85b9-l6sth                                 calico-system       0                   (default)
813176f56c107       45 hours ago        Ready               tigera-operator-76bbbcbc85-x6kzt                              tigera-operator     0                   (default)
f2ff65cae5ff9       45 hours ago        Ready               kube-proxy-bpqf8                                              kube-system         0                   (default)
defdbef7e8f3f       45 hours ago        Ready               kube-apiserver-k8s-master-01.example.com            kube-system         0                   (default)
9a165c4313dc9       45 hours ago        Ready               kube-scheduler-k8s-master-01.example.com            kube-system         0                   (default)
b2fd905625b90       45 hours ago        Ready               kube-controller-manager-k8s-master-01.example.com   kube-system         0                   (default)
d23524b1b3345       45 hours ago        Ready               etcd-k8s-master-01.example.com                      kube-system         0                   (default)

List running containers using crictl:

[root@k8s-master-01 ~]# sudo crictl  ps
CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
7dbb7957f9e46       98e04bee275750acf8b94e3e7dec47336ade7efda240556cd39273211d090f74   45 hours ago        Running             tigera-operator           1                   813176f56c107
22a2a949184bf       b51ddc1014b04295e85be898dac2cd4c053433bfe7e702d7e9d6008f3779609b   45 hours ago        Running             kube-scheduler            5                   9a165c4313dc9
3db88f5f14181       5425bcbd23c54270d9de028c09634f8e9a014e9351387160c133ccf3a53ab3dc   45 hours ago        Running             kube-controller-manager   5                   b2fd905625b90
b684808843527       4e7da027faaa7b281f076bccb81e94da98e6394d48efe1f46517dcf8b6b05b74   45 hours ago        Running             calico-apiserver          0                   4f9630e87f62f
43ef02d79f68e       5df320a38f63a072dac00e0556ff1fba5bb044b12cb24cd864c03b2fee089a1e   45 hours ago        Running             calico-kube-controllers   0                   cbb3e8f3e027f
f488d1d1957ff       8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44   45 hours ago        Running             coredns                   0                   e54575c66d1f4
db2310c6e2bc7       8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44   45 hours ago        Running             coredns                   0                   c4d03ba28658e
823b9d049c8f3       355c1ee44040be5aabadad8a0ca367fbadf915c50a6ddcf05b95134a1574c516   45 hours ago        Running             calico-node               0                   350967fe5a9ae
5942ea3535b3c       8473ae43d01b845e72237bf897fda02b7e28594c9aa8bcfdfa2c9a55798a3889   45 hours ago        Running             calico-typha              0                   a05fe07cac170
9072655f275b1       873127efbc8a791d06e85271d9a2ec4c5d58afdf612d490e24fb3ec68e891c8d   45 hours ago        Running             kube-proxy                0                   f2ff65cae5ff9
3855de8a093c1       0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba   45 hours ago        Running             etcd                      4                   d23524b1b3345
87f03873cf9c4       e64579b7d8862eff8418d27bf67011e348a5d926fa80494a6475b3dc959777f5   45 hours ago        Running             kube-apiserver            4                   defdbef7e8f3f

List container(s) resource usage statistics in a Kubernetes node:

[root@k8s-master-01 ~]# sudo crictl stats
CONTAINER           CPU %               MEM                 DISK                INODES
22a2a949184bf       0.89                27.51MB             232B                13
3855de8a093c1       1.85                126.8MB             297B                17
3db88f5f14181       0.75                68.62MB             404B                21
43ef02d79f68e       0.00                24.32MB             437B                18
5942ea3535b3c       0.09                26.72MB             378B                20
7dbb7957f9e46       0.19                30.97MB             368B                20
823b9d049c8f3       1.55                129.5MB             12.11kB             93
87f03873cf9c4       3.98                475.7MB             244B                14
9072655f275b1       0.00                20.13MB             2.71kB              23
b684808843527       0.71                36.68MB             405B                21
db2310c6e2bc7       0.10                22.51MB             316B                17
f488d1d1957ff       0.09                21.5MB              316B                17

Fetch the logs of a container:

[root@k8s-master-01 ~]# sudo crictl ps
[root@k8s-master-01 ~]# sudo crictl logs <containerid>

# Example
[root@k8s-master-01 ~]# sudo crictl logs 9072655f275b1
I0924 18:06:37.800801       1 proxier.go:659] "Failed to load kernel module with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack_ipv4"
I0924 18:06:37.815013       1 node.go:172] Successfully retrieved node IP: 192.168.200.10
I0924 18:06:37.815040       1 server_others.go:140] Detected node IP 192.168.200.10
W0924 18:06:37.815055       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
I0924 18:06:37.833413       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I0924 18:06:37.833459       1 server_others.go:212] Using iptables Proxier.
I0924 18:06:37.833469       1 server_others.go:219] creating dualStackProxier for iptables.
W0924 18:06:37.833487       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I0924 18:06:37.833761       1 server.go:649] Version: v1.26.2
I0924 18:06:37.837601       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0924 18:06:37.837912       1 config.go:315] Starting service config controller
I0924 18:06:37.837921       1 shared_informer.go:240] Waiting for caches to sync for service config
I0924 18:06:37.838003       1 config.go:224] Starting endpoint slice config controller
I0924 18:06:37.838007       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
E0924 18:06:37.843521       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"k8s-master-01.example.com.16a7d4488288392d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc04ba2cb71ef9faf, ext:72545250, loc:(*time.Location)(0x2d81340)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-k8s-master-01.example.com", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"k8s-master-01.example.com", UID:"k8s-master-01.example.com", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "k8s-master-01.example.com.16a7d4488288392d" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
I0924 18:06:37.938785       1 shared_informer.go:247] Caches are synced for endpoint slice config
I0924 18:06:37.938858       1 shared_informer.go:247] Caches are synced for service config


Conclusion

You now have a functional multi-node Kubernetes cluster deployed on Rocky Linux 8 servers. Our cluster has three control plane nodes, which can guarantee high availability when placed behind a load balancer with node health checks. In this guide we used CRI-O as a lightweight alternative to the Docker container runtime for our Kubernetes setup. We also chose the Calico plugin for the Pod network.

You’re now ready to deploy your containerized applications on the cluster or join more worker nodes to it. We have other articles written earlier to help you start your Kubernetes journey:

