Saturday, December 28, 2024

Install Kubernetes Cluster on Debian with Kubespray

In this guide, we shall be focusing on Kubernetes, the platform that continues to transform how we deploy and manage business applications. Whether you use CI/CD or prefer the manual way, Kubernetes remains the best choice for handling, managing, scaling, and orchestrating your applications. For those who do not know, Kubernetes has deprecated dockershim, which means it no longer supports Docker as the container runtime. Before you throw tantrums, it is important to note that this change does not affect how you build and deploy your images. Docker contains many components that Kubernetes has no use for, since all it needs is a simple, lightweight container runtime to launch and run containers.

As you have already guessed, you can continue building your images using Docker, and Kubernetes will pull and run them using other container runtimes such as containerd or CRI-O. Containerd and CRI-O are pretty lightweight, and they plug in neatly to the Container Runtime Interface specification.

Installation pre-requisites

In order for this deployment to start and succeed, we need an extra server or computer that will be used as the installation machine. This machine will hold the Kubespray files, connect over SSH to the servers where Kubernetes will be installed, and proceed to set up Kubernetes on them. The deployment architecture is illustrated by the diagram below, with one master, one etcd node, and two worker nodes.

[Diagram: Kubespray deployment architecture]

Make sure you generate SSH keys on the builder machine and copy your public key to the Debian 10 servers where Kubernetes will be built.
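For example, key generation and distribution can be done as follows (a sketch — the node IPs and the debian user match the inventory used later in this guide; adjust them to your environment):

```shell
# Generate a key pair on the builder machine (no passphrase for simplicity)
ssh-keygen -t ed25519 -N "" -f "$HOME/.ssh/id_ed25519"

# Copy the public key to each server Kubespray will manage
for host in 192.168.1.10 192.168.1.11 192.168.1.12; do
  ssh-copy-id -i "$HOME/.ssh/id_ed25519.pub" debian@"$host"
done
```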

Having said all of that, we are going to install Kubernetes on Debian 10 servers using Kubespray and we shall be using containerd as the container runtime. The following steps will suffice to get your cluster ready to accommodate your apps.

Step 1: Prepare your servers

Preparing your servers is a crucial step that ensures every aspect of the deployment runs smoothly to the very end. In this step, we shall do simple updates and make sure that important packages are installed. Issue the commands below on each of your servers to kick everything off from a clean slate.

sudo apt update && sudo apt upgrade -y

Step 2: Clone Kubespray Git repository

In this step, we are going to fetch the Kubespray files onto our local machine (the installer machine), then make the necessary configurations by choosing containerd as the container runtime and populating the requisite files with the details of our servers (etcd, masters, workers).

cd ~
sudo apt install git vim
git clone https://github.com/kubernetes-sigs/kubespray.git

Change to the project directory:

cd kubespray

This directory contains the inventory files and playbooks used to deploy Kubernetes.

Step 3: Prepare Local machine

On the local machine you’ll run the deployment from, you need to install Python 3. We assume you’re performing this on an Ubuntu / Debian system.

We’ll install Python 3.10. Let’s ensure the dependencies are installed.

sudo apt update
sudo apt install wget libreadline-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev libffi-dev zlib1g-dev build-essential

Download and extract the Python source archive, then build and install it:

VERSION=3.10.12
wget https://www.python.org/ftp/python/${VERSION}/Python-${VERSION}.tgz
tar xzf Python-${VERSION}.tgz
cd Python-${VERSION}/
./configure --enable-optimizations
sudo make altinstall

Confirm Python is installed

$ python3.10 -V
Python 3.10.12

Install Pip for Python 3.10:

curl -sS https://bootstrap.pypa.io/get-pip.py | python3.10

Check version of Pip by running the following commands:

$ pip3.10 -V
pip 23.2.1 from /usr/local/lib/python3.10/site-packages/pip (python 3.10)

Step 4: Create Kubernetes Cluster inventory file

The inventory is composed of 3 groups:

  • kube_node : list of Kubernetes nodes where the pods will run.
  • kube_control_plane : list of servers where the Kubernetes control plane components (apiserver, scheduler, controller-manager) will run.
  • etcd : list of servers that compose the etcd cluster. You should have at least 3 servers for failover purposes.

There are also two special groups:

  • calico_rr : used for advanced Calico networking cases (route reflectors)
  • bastion : configure a bastion host if your nodes are not directly reachable

Create an inventory file:

cp -rfp inventory/sample inventory/mycluster

Define your inventory with your servers’ IP addresses and map each host to its correct role.

$ vim inventory/mycluster/inventory.ini
[all]
master01 ansible_host=192.168.1.10 etcd_member_name=etcd1   ansible_user=debian
master02 ansible_host=192.168.1.11 etcd_member_name=etcd2   ansible_user=debian
master03 ansible_host=192.168.1.12 etcd_member_name=etcd3   ansible_user=debian
node01   ansible_host=192.168.1.13 etcd_member_name=        ansible_user=debian
node02   ansible_host=192.168.1.14 etcd_member_name=        ansible_user=debian
node03   ansible_host=192.168.1.15 etcd_member_name=        ansible_user=debian

# ## configure a bastion host if your nodes are not directly reachable
# [bastion]
# bastion ansible_host=x.x.x.x ansible_user=some_user

[kube_control_plane]
master01
master02
master03

[etcd]
master01
master02
master03

[kube_node]
node01
node02
node03

[calico_rr]

[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr

You can also add host entries to /etc/hosts on your workstation:

$ sudo vim /etc/hosts
192.168.1.10 master01
192.168.1.11 master02
192.168.1.12 master03
192.168.1.13 node01
192.168.1.14 node02
192.168.1.15 node03

If your private SSH key has a passphrase, add it to the SSH agent before starting the deployment:

$ eval `ssh-agent -s` && ssh-add
Agent pid 4516
Enter passphrase for /home/debian/.ssh/id_rsa: 
Identity added: /home/debian/.ssh/id_rsa (/home/debian/.ssh/id_rsa)

Install dependencies from requirements.txt

sudo pip3.10 install -r requirements.txt

Confirm ansible installation.

$ ansible --version
ansible [core 2.14.9]
  config file = /root/kubespray/ansible.cfg
  configured module search path = ['/root/kubespray/library']
  ansible python module location = /usr/local/lib/python3.10/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.10.12 (main, Aug 17 2023, 11:11:59) [GCC 8.3.0] (/usr/local/bin/python3.10)
  jinja version = 3.1.2
  libyaml = True

Review and change parameters under inventory/mycluster/group_vars

We shall review and change parameters under inventory/mycluster/group_vars to ensure that Kubespray uses containerd.

$ vim inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
container_manager: containerd
kube_service_addresses: 10.233.0.0/18
kube_pods_subnet: 10.233.64.0/18
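The service and pod subnets must not overlap, or cluster networking will break. You can sanity-check the values above with Python's ipaddress module:

```python
import ipaddress

# Subnets from inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
services = ipaddress.ip_network("10.233.0.0/18")
pods = ipaddress.ip_network("10.233.64.0/18")

# Overlapping ranges would break cluster networking
print(services.overlaps(pods))   # False
print(services.num_addresses)    # 16384 addresses in each /18
```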

Step 5: Deploy Kubernetes Cluster with Kubespray

Now execute the playbook to deploy a production-ready Kubernetes cluster with Ansible. Please note that the target servers must have access to the Internet in order to pull images.

Start the deployment by running the command:

ansible-playbook -i inventory/mycluster/inventory.ini --become --user=debian --become-user=root cluster.yml

Replace “debian” with the remote user Ansible will connect to the nodes as. You should not get any failed tasks during execution.

Installation progress should look like below

[Screenshot: Kubespray installation progress]

The very last messages will look like the screenshot shared below.

[Screenshot: Kubespray play recap]

Once the playbook executes to the tail end, log in to the master node and check the cluster status.

$ sudo kubectl cluster-info
Kubernetes master is running at https://172.20.193.154:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

You can also check the nodes

$ sudo kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master0   Ready    master   11m   v1.19.5
worker1   Ready    <none>   10m   v1.19.5
worker2   Ready    <none>   10m   v1.19.5

$ sudo kubectl get endpoints -n kube-system
NAME                      ENDPOINTS                                                     AGE
coredns                   10.233.101.1:53,10.233.103.1:53,10.233.101.1:53 + 3 more...   23m
kube-controller-manager   <none>                                                        27m
kube-scheduler            <none>                                                        27m

Step 6: Install Kubernetes Dashboard & Access

This is an optional step, in case you do not have another way to access your Kubernetes cluster via a cool interface like Lens or VMware Octant. To get the dashboard installed, follow the detailed guide below.

And once it is working, you will need to create an admin user to access your cluster. Use the guide below to do that:

If you like, you can also authenticate your users with Active Directory by following our guide on how to Authenticate Kubernetes Dashboard Users With Active Directory.

Step 7: Install Nginx-Ingress controller

In this step, we are going to include an ingress controller to help us access our services from outside the cluster. The simplest approach to enabling external access to services is the NodePort service type. The disadvantage of NodePort is that services must use a limited range of ports (by default, 30000 to 32767), and a given port can be mapped to only one service.
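For illustration, a NodePort service pinning one of the ports in that range might look like this (a hypothetical httpbin service; the names and ports here are assumptions, not part of the deployment above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: httpbin-nodeport
  namespace: default
spec:
  type: NodePort
  selector:
    app: httpbin
  ports:
    - port: 8000        # cluster-internal port
      targetPort: 80    # container port
      nodePort: 30080   # must fall within the 30000-32767 range
```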

The Ingress resource makes it possible to expose multiple services through a single external endpoint, such as a load balancer. With this approach, teams can define host, prefix, and other rules to route traffic to their service resources however they prefer.

Refer to our guide below on the installation of Nginx Ingress Controller:

For Traefik check out:

Step 8: Add Ingress Rule to access your service

Until now, we have a working ingress controller and a sample deployment (httpbin) that we shall use to test how it works. Create the following Ingress resource targeting httpbin. If you read the manifest we fetched earlier, you will notice that it creates a service called “httpbin” listening on port 8000. Armed with that information, let us create the Ingress:

$ vim httpbin-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: master.geeksforgeeks.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: httpbin
                port:
                  number: 8000

Save the file, then apply it to your cluster. Note that “master.geeksforgeeks.org” must resolve to the IP of the ingress node as shown below.

$ sudo kubectl apply -f httpbin-ingress.yaml
ingress.networking.k8s.io/httpbin-ingress configured
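Note that the Ingress only routes traffic; httpbin itself must be running. If you no longer have the manifest mentioned earlier, a minimal equivalent deployment and service would look roughly like this (a sketch assuming the kennethreitz/httpbin image, which serves on container port 80):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      containers:
        - name: httpbin
          image: kennethreitz/httpbin
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  namespace: default
spec:
  selector:
    app: httpbin
  ports:
    - port: 8000      # matches the backend port referenced by the Ingress
      targetPort: 80
```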

Then confirm that the ingress was created successfully

$ sudo kubectl get ing
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME              CLASS    HOSTS                           ADDRESS          PORTS   AGE
approot           <none>   master1.geeksforgeeks.org   172.20.202.161   80      168m
httpbin-ingress   <none>   master.geeksforgeeks.org    172.20.202.161   80      108m

What is happening here is that any traffic arriving at the “master.geeksforgeeks.org” root URL will be routed to the httpbin service automatically. Is that not pretty!

What else is left to do but test whether we can reach our service? First, let us investigate what our Ingress Controller service looks like.

$ sudo kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.233.8.37    <none>        80:30242/TCP,443:31635/TCP   123m       
ingress-nginx-controller-admission   ClusterIP   10.233.51.71   <none>        443/TCP                      123m 

As you will notice, our “ingress-nginx-controller” is exposed via NodePort. That is very good information, because it tells us exactly how to reach our application. With that in mind, let us open up a browser and point it to our application at http://master.geeksforgeeks.org:30242. Ensure that port 30242 has been opened in the firewall of the “master.geeksforgeeks.org” node. You should see something eye-popping as shown below:

[Screenshot: httpbin page served via the Ingress]

Conclusion

Kubespray makes the deployment of Kubernetes a cinch. Thanks to the team that developed the playbooks involved in achieving this complex deployment, we now have a ready platform just waiting for your applications that will serve the world.

In case you intend to set up a bigger cluster, simply place the various components (etcd, masters, workers, etc.) in the inventory files and Kubespray will handle the rest. May your year flourish, your endeavors bear good fruit, and your investments pay off. Let us face it with fortitude, with laughter, hard work, and grace.
