
Deploy Kubernetes Cluster on Linux With k0s

Have you ever wondered whether there is a simple way to set up a Kubernetes cluster? There is. K0s is a simple, solid, and certified Kubernetes distribution that can be deployed on any infrastructure, which means it can run in any private or public cloud environment.

K0s is distributed as a single binary that bundles the minimal required dependencies and installs with a single command. This drastically reduces the complexity of installing and configuring a Kubernetes cluster, and it means that engineers with little Kubernetes experience can still set up a cluster.

K0s is also as secure as upstream Kubernetes, since vulnerability issues can be fixed directly in the k0s distribution. K0s is also open source!

Features of k0s

K0s is a fully featured Kubernetes distribution and ships with all the features of upstream Kubernetes. Some of these features include:

  1. Container runtime: containerd
  2. Support for all Linux distributions and Windows Server 2019
  3. Support for x86-64, ARM64 and ARMv7
  4. Built-in security and conformance for CIS
  5. Supported CNI providers: Kube-Router (default), Calico, custom
  6. Supported storage & CSI providers: Kubernetes in-tree storage providers, custom

There are more features of k0s, which are discussed on the official website.

Setup k0s Cluster on Linux

This article covers the simple steps required to set up a k0s cluster on Linux. The steps are highlighted below:

  1. Set up the k0s control plane
  2. Join worker nodes to the k0s cluster
  3. Add more controller nodes to the k0s cluster
  4. Manage the k0s cluster remotely
  5. Deploy a sample application on k0s

Setup k0s Control Plane

Step 1. Download the k0s binary file

As highlighted before, k0s is simple to set up. Run the command below to download and install k0s on the node you intend to use as the control plane:

curl -sSLf https://get.k0s.sh | sudo sh

The command above downloads and saves the k0s binary to /usr/local/bin. Make sure that /usr/local/bin is included in your PATH; you can verify this by running echo $PATH. If the path is not listed, add it using the commands below:

echo "export PATH=\$PATH:/usr/local/bin" | sudo tee -a /etc/profile
source /etc/profile
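By default the script installs the latest k0s release. The install script also supports pinning a specific version through the K0S_VERSION environment variable (the version below is only an example), and you can verify the installed binary afterwards:

curl -sSLf https://get.k0s.sh | sudo K0S_VERSION=v1.26.2+k0s.0 sh
k0s version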

Step 2. Install the k0s controller

Switch to the root user:

sudo su -

Install the k0s controller. The --enable-worker flag allows the controller node to also run workloads:

k0s install controller --enable-worker
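Optionally, if you want to customize the cluster configuration (for example the CNI provider or pod CIDR) before installing, k0s can generate a default configuration file that you edit and pass to the installer. A minimal sketch; the file path is only a common convention:

k0s config create > /etc/k0s/k0s.yaml
k0s install controller --enable-worker -c /etc/k0s/k0s.yaml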

Wait for the installation to complete, then start the controller:

sudo systemctl start k0scontroller

Check the status of the controller to confirm that it has started successfully:

# systemctl status k0scontroller
● k0scontroller.service - k0s - Zero Friction Kubernetes
   Loaded: loaded (/etc/systemd/system/k0scontroller.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2022-04-17 15:55:02 EAT; 58s ago
     Docs: https://docs.k0sproject.io
 Main PID: 76539 (k0s)
    Tasks: 158
   Memory: 293.6M
   CGroup: /system.slice/k0scontroller.service
           ├─ 5095 /var/lib/k0s/bin/containerd-shim-runc-v2 -namespace k8s.io -id aeb8cd09ada8cdf6abb1750558573cfe401ea5803a6358efb85be5cf76aea88e -address /run/k0s/containerd.sock
           ├─ 5113 /var/lib/k0s/bin/containerd-shim-runc-v2 -namespace k8s.io -id 7aee6f50d8b084bf75fa5224420b6d24f101d547eea7c093e43c6d0d85e9a0cb -address /run/k0s/containerd.sock
           ├─ 6757 /var/lib/k0s/bin/containerd-shim-runc-v2 -namespace k8s.io -id 3c1f1327416489fa53386bb01bf6aad7ab2b0c732191eb04104f8338d6c78ddc -address /run/k0s/containerd.sock
           ├─ 6858 /var/lib/k0s/bin/containerd-shim-runc-v2 -namespace k8s.io -id d0300d503aa4048f0c766b739084f5deba145ac0a0b6297689dff6d2f81d7c12 -address /run/k0s/containerd.sock
           ├─ 6964 /var/lib/k0s/bin/containerd-shim-runc-v2 -namespace k8s.io -id cefba75a46f0731605ed16bacb7a15431425a1a55d6c62972bbe1447761555c2 -address /run/k0s/containerd.sock
           ├─39473 /var/lib/k0s/bin/containerd-shim-runc-v2 -namespace k8s.io -id aa36e5a1504b238c886da9eb24cc39d62b69a5758c835dd43d055d4d70ec13bb -address /run/k0s/containerd.sock
           ├─76539 /usr/local/bin/k0s controller --enable-worker=true
           ├─76589 /var/lib/k0s/bin/etcd --cert-file=/var/lib/k0s/pki/etcd/server.crt --peer-cert-file=/var/lib/k0s/pki/etcd/peer.crt --log-level=info --auth-token=jwt,pub-key=/var/lib/k0s/pki/etcd/jwt.pub,priv>
           ├─76601 /var/lib/k0s/bin/kube-apiserver --kubelet-client-certificate=/var/lib/k0s/pki/apiserver-kubelet-client.crt --proxy-client-key-file=/var/lib/k0s/pki/front-proxy-client.key --requestheader-clie>
           ├─76613 /var/lib/k0s/bin/konnectivity-server --agent-service-account=konnectivity-agent --authentication-audience=system:konnectivity-server --logtostderr=true --stderrthreshold=1 --cluster-cert=/var>
           ├─76621 /var/lib/k0s/bin/kube-scheduler --authorization-kubeconfig=/var/lib/k0s/pki/scheduler.conf --kubeconfig=/var/lib/k0s/pki/scheduler.conf --v=1 --bind-address=127.0.0.1 --leader-elect=true --pr>
           ├─76626 /var/lib/k0s/bin/kube-controller-manager --requestheader-client-ca-file=/var/lib/k0s/pki/front-proxy-ca.crt --service-cluster-ip-range=10.96.0.0/12 --allocate-node-cidrs=true --profiling=fals>
           ├─76639 /usr/local/bin/k0s api --config= --data-dir=/var/lib/k0s
           └─76650 /var/lib/k0s/bin/containerd --root=/var/lib/k0s/containerd --state=/run/k0s/containerd --address=/run/k0s/containerd.sock --log-level=info --config=/etc/k0s/containerd.toml

Enable the service to start at boot:

sudo systemctl enable k0scontroller

You can now access your k0s cluster using the command prefix:

# k0s kubectl <command>

K0s ships with kubectl embedded, so you don’t have to install anything else to use the cluster.
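Typing the k0s prefix for every command can get tedious. If you prefer plain kubectl on the controller node, a simple shell alias (an optional convenience, not something k0s requires) does the trick:

echo "alias kubectl='k0s kubectl'" >> ~/.bashrc
source ~/.bashrc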

Check that the pods have started:

k0s kubectl get pods --all-namespaces

Sample output:

# k0s kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-7bf57bcbd8-sq5zm          1/1     Running   0          67s
kube-system   konnectivity-agent-rjrpk          1/1     Running   0          62s
kube-system   kube-proxy-dd9rm                  1/1     Running   0          65s
kube-system   kube-router-wf587                 1/1     Running   0          66s
kube-system   metrics-server-7446cc488c-sg9ww   1/1     Running   0          62s

Note that it takes some time for the pods to come up.

Check the available nodes:

# k0s kubectl get nodes -o wide
NAME              STATUS   ROLES           AGE   VERSION       INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                           KERNEL-VERSION                 CONTAINER-RUNTIME
rocky8.mylab.io   Ready    control-plane   80s   v1.26.2+k0s   65.109.11.149   <none>        Rocky Linux 8.7 (Green Obsidian)   4.18.0-425.13.1.el8_7.x86_64   containerd://1.6.18

Join Worker Nodes to k0s Cluster

The next phase is to add worker nodes to the k0s cluster. To do so, we need to download and install the k0s binary on the server that we intend to use as the worker node.

Before we can set up the worker node, we need to generate an authentication token that will be used to join the node to the cluster.

On the controller node, run the command below to generate the token:

k0s token create --role=worker

The command above generates a long string of characters that the worker node will use to authenticate to the controller.
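The token can also be given a limited lifetime and written straight to a file, which is handy for scripted setups. A sketch using the --expiry flag; the file name is hypothetical:

k0s token create --role=worker --expiry=100h > worker-token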

Download the k0s binary on the worker node:

sudo su -
curl -sSLf https://get.k0s.sh | sudo sh

Once the binary has downloaded successfully, run the command below to join the node to the cluster:

k0s worker "<login-token>" &

Replace <login-token> with the token generated on the controller node in the previous step, e.g.:

sudo k0s worker "H4sIAAAAAAAC/2xVUY+bPBZ9n1+RPzCtDcmoRNqHktghZHBqY5vgN8B0IDbEBSZhstr/vpq0lXal7+36nqNzrKure54K18p6GNtLv15c4VNl38epHsb10/PiT71+WiwWi7EervWwXjTT5Mb1168w8L7Al29fIABfPBisX5ZL/0Gs6mFqf7ZVMdXPxfvUXIZ2+njWxVSsF68pmF5TuGFCx7wNt0zGIhUqpgAL9sDAtDE....6n31Hxu/4dFX9z48H6bLyPj9d7Wdt6ei4vl2mchsL9v9r7MNT99PxX6dE0ba/Xi82l/9m+Pbmh/lkPdV/V43rx7/88fao+zP+I/IP8w/jxheli6n69COr6rfW/jFr/fHmf9bJrpzME8Om/AQAA//8V8FiGBwcAAA==" & 

Give the node a few moments to become ready; the network, DNS and proxy pods are being initialized.
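Running k0s worker in the background works for a quick test, but for a permanent setup you will likely want the worker registered as a systemd service instead. A sketch, assuming you saved the token to a file on the worker node (the path is hypothetical):

sudo k0s install worker --token-file /path/to/worker-token
sudo systemctl start k0sworker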

You can then check the available nodes to confirm that the node has been added successfully.

# k0s kubectl get nodes -o wide
NAME         STATUS     ROLES    AGE     VERSION       INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                           KERNEL-VERSION                CONTAINER-RUNTIME
rocky8       Ready      <none>   6h13m   v1.23.5+k0s   192.168.100.219   <none>        Rocky Linux 8.4 (Green Obsidian)   4.18.0-305.7.1.el8_4.x86_64   containerd://1.4.6
node01       Ready      <none>   70s     v1.23.5+k0s   192.168.100.149   <none>        CentOS Linux 7 (Core)              3.10.0-1160.31.1.el7.x86_64   containerd://1.4.6

Add Controller Nodes to k0s Cluster

Just like with the worker nodes, we need to create a token on the existing controller node for the additional controller node:

sudo su -
k0s token create --role=controller

Download the k0s binary on the host that you intend to use as a secondary controller node:

sudo su -
curl -sSLf https://get.k0s.sh | sudo sh

Join the node to the cluster using the token generated earlier:

k0s controller "<token>"

Example:

k0s controller "H4sIAAAAAAAC/3RVwXKjOhbd5yv8A+kngZ2OXTWLxpaMcZBbQhJGO0AkYAlQA7GJp+bfp5J+XTWzeLure06do7p1656H3DWyGsam7zaLK3wo7fs4VcO4eXhc/F1vHhaLxWKshms1bBb1NLlx89dfcO19g0/P3yAA3zy43qyXS/+LWFbD1Lw2ZT5Vj/n7VPdDM3086nzKN4uXBEwvCdwyoSPeBDsmI5EIFVGABfvCwLQWFgNScNsRHLgThSyVZaBsFPDo5/32bMDX2jQCYS2X0iSCCR+LrLL7T/18Ni0eVttVkYMD6UfTdV8/Q7Kn7Xv6PiT258sT4b7+Pn65Mz9NZWw2PR99M4Dbn7f8H3Yai66fGP2FfTNJ3eLLZ999q8Pbiheq2GqiurcbP4938ePoW//H+L/IPDl/3XR6beVN1mMS+fvz9X396+L4en9++v93v9alfv7w//DQAA//8NPyCvDQcAAA=="
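As with workers, a joining controller can be registered as a systemd service instead of being run in the foreground. A sketch, again assuming the token was saved to a file (the path is hypothetical):

sudo k0s install controller --token-file /path/to/controller-token
sudo systemctl start k0scontroller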

After a successful authentication, check the status of the new controller node:

# k0s status
Version: v1.26.2+k0s.0
Process ID: 32061
Role: controller
Workloads: true
SingleNode: false
Kube-api probing successful: true

We can confirm that the new node has joined and is running with the role of a controller. Controller nodes are not registered as Kubernetes nodes unless they also run a worker, so a controller-only node won’t be listed when you run the get nodes command.
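You can also verify that all controllers have joined the embedded etcd cluster; k0s bundles an etcd client subcommand for this. Run it on any controller node:

sudo k0s etcd member-list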

Be advised that running exactly two controller nodes can make your k0s cluster unstable: the embedded etcd datastore requires a majority of members (quorum) to operate, so a two-controller cluster cannot tolerate the loss of either node. Use a single controller or an odd number of controllers (three or more).

Manage k0s Cluster Remotely

You might want to manage your cluster remotely using the native kubectl. K0s stores the admin kubeconfig file at /var/lib/k0s/pki/admin.conf.

This means that you can easily copy the config file and use it to access your cluster remotely.

Copy the config file to the user’s home directory:

sudo cp /var/lib/k0s/pki/admin.conf ~/k0s.conf

Download the k0s.conf file from the controller node to your remote machine:

scp <username>@<SERVER_IP>:~/k0s.conf .

Example:

scp root@192.168.100.219:~/k0s.conf .

In k0s.conf, change the server address from localhost to the IP address of the controller node. Then export the config file:

export KUBECONFIG=k0s.conf

You can now manage your cluster remotely using kubectl.
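Putting it together, and assuming the controller’s address is 192.168.100.219 (the internal IP from our example cluster; yours will differ), the edit and a quick sanity check could look like this:

sed -i 's/localhost/192.168.100.219/' k0s.conf
export KUBECONFIG=$PWD/k0s.conf
kubectl get nodes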

Deploy Applications on k0s

We can test that application deployment works on the k0s cluster. We will use an Nginx application:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
EOF

Check if the pods are running:

$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-585449566-csvsf   1/1     Running   0          105s
nginx-deployment-585449566-hhx6z   1/1     Running   0          106s

Two pods have started and are in the Running state. This is because we set the number of replicas to 2 in our manifest.

To expose the deployment, we will create a NodePort service:

$ kubectl expose deployment nginx-deployment --type=NodePort --port=80
service/nginx-deployment exposed

Check the port on which the service has been exposed:

$ kubectl get svc
NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.96.0.1     <none>        443/TCP        7h23m
nginx-deployment   NodePort    10.96.242.1   <none>        80:31710/TCP   94s

The service has been exposed on port 31710 using NodePort. We can access the application in a browser using a worker node’s IP address and that port.
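A quick test from any machine that can reach a worker node; the IP and port below come from our example output, so yours will differ:

curl http://192.168.100.149:31710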


Uninstalling k0s Kubernetes Cluster

To uninstall k0s, you need to stop the k0s service and then remove the installation on each node.

sudo k0s stop

Then remove the k0s setup:

$ sudo k0s reset
INFO[2021-07-13 11:46:59] no config file given, using defaults         
INFO[2021-07-13 11:46:59] * remove k0s users step:                     
INFO[2021-07-13 11:46:59] no config file given, using defaults         
INFO[2021-07-13 11:47:03] * uninstal service step                      
INFO[2021-07-13 11:47:03] Uninstalling the k0s service                 
INFO[2021-07-13 11:47:03] * remove directories step                    
INFO[2021-07-13 11:47:04] * CNI leftovers cleanup step                 
INFO k0s cleanup operations done. To ensure a full reset, a node reboot is recommended. 


Conclusion

That’s it for deploying a k0s cluster. We have seen that very few steps are involved in setting up this kind of cluster. Feel free to reach out in case you encounter problems in your setup, and check out other related articles on the site.
