
Install Kubernetes Cluster on Ubuntu 22.04 using kubeadm

Kubernetes (aka K8s) is a project originally created by Google and now maintained by the Cloud Native Computing Foundation. It is a powerful, open-source container orchestration system that automates the deployment and management of containerized workloads and services. K8s relies on operating-system-level virtualization rather than hardware virtualization.

Kubernetes automates many operational tasks in software deployment and lets you schedule and run containers on clusters of physical or virtual machines in public cloud, private cloud, and hybrid environments. It is an ideal system for managing containers: it improves resource utilization, reduces costs, optimizes hardware usage, improves DevOps productivity, and simplifies changes to existing containerized applications.

Services provided by Kubernetes

  • Self-healing: A new container is created automatically in place of a malfunctioning one
  • Horizontal scaling: Containers/Pods are added or removed automatically based on CPU and other resource metrics
  • Compute scheduling: Resource requirements are determined and assigned per container automatically
  • Service discovery/load balancing: Traffic is load-balanced across multiple instances using DNS and IP addresses
  • Automated rollouts and rollbacks: If a new instance fails, a rollback to the previous version happens automatically
  • Secret & configuration management: Automated management of application secrets such as API keys, app configurations, passwords, and digital certificates
  • Volume management: Automated management of persistent storage according to application needs

For Rocky Linux 8: Install Kubernetes Cluster on Rocky Linux 8 with Kubeadm & CRI-O

Kubernetes Nodes

There are two categories of Nodes in a Kubernetes cluster, namely:

  • Master Nodes: Handle the control API calls for pods, replication controllers, services, nodes, and other components of a Kubernetes cluster.
  • Worker Nodes: Provide the runtime environment for containers. A set of container pods can span multiple nodes.

For your master and worker nodes, the following are the minimum recommended hardware requirements:

  • Memory: 2 GiB or more of RAM per machine
  • CPUs: At least 2 CPUs on the control plane machine
  • Internet connectivity for pulling container images (a private registry can also be used)
  • Full network connectivity between all machines in the cluster, whether on a private or public network

Deploy Kubernetes Cluster on Ubuntu 22.04 using kubeadm

The lab used in this guide has three servers: one master node and two worker nodes where containerized workloads will run. Additional nodes can be added to suit your environment's load requirements. For high availability, three control plane nodes are required behind a shared control plane API endpoint.

Nodes that will be used in this guide:

Server Role   Server Hostname            Specs             IP Address
Master Node   k8smas01.neveropen.co.za   4GB RAM, 2 vCPUs  172.20.30.10
Worker Node   k8swkr01.neveropen.co.za   4GB RAM, 2 vCPUs  172.20.30.13
Worker Node   k8swkr02.neveropen.co.za   4GB RAM, 2 vCPUs  172.20.30.14

1. Upgrade your Ubuntu servers

Provision the servers to be used in the deployment of Kubernetes on Ubuntu 22.04. The setup process will vary depending on the virtualization or cloud environment you’re using.

Once the servers are ready, update them.

sudo apt update
sudo apt -y full-upgrade
[ -f /var/run/reboot-required ] && sudo reboot -f

2. Install kubelet, kubeadm and kubectl

Once the servers are rebooted, add the Kubernetes repository to all of them. Note that the legacy apt.kubernetes.io repository hosted at packages.cloud.google.com has been deprecated and taken offline, so the commands below use the community-owned pkgs.k8s.io repository for the v1.28 release instead.

sudo apt install curl apt-transport-https -y
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/k8s.gpg
echo 'deb [signed-by=/etc/apt/trusted.gpg.d/k8s.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Then install required packages.

sudo apt update
sudo apt install wget curl vim git kubelet kubeadm kubectl -y
sudo apt-mark hold kubelet kubeadm kubectl

Confirm installation by checking the version of kubectl.

$ kubectl version --client
Client Version: v1.28.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.0", GitCommit:"855e7c48de7388eb330da0f8d9d2394ee818fb8d", GitTreeState:"clean", BuildDate:"2023-08-15T10:20:15Z", GoVersion:"go1.20.7", Compiler:"gc", Platform:"linux/amd64"}

3. Disable Swap Space

Turn off all active swap areas (as listed in /proc/swaps):

sudo swapoff -a 

Check if swap has been disabled by running the free command.

$ free -h
               total        used        free      shared  buff/cache   available
Mem:           7.7Gi       283Mi       5.9Gi       1.0Mi       1.5Gi       7.2Gi
Swap:             0B          0B          0B

Now disable Linux swap space permanently in /etc/fstab. Search for the swap line and comment it out by adding a # (hash) sign in front of it.

sudo sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab

Or manually edit.

$ sudo vim /etc/fstab
#/swap.img	none	swap	sw	0	0

Confirm the setting is correct:

sudo mount -a
free -h

Enable kernel modules and configure sysctl.

# Enable kernel modules
sudo modprobe overlay
sudo modprobe br_netfilter

# Add some settings to sysctl
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Reload sysctl
sudo sysctl --system
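Verify that the settings took effect; all three keys should report 1:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward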

4. Install Container runtime (Master and Worker nodes)

To run containers in Pods, Kubernetes uses a container runtime. Supported container runtimes are:

  • Docker
  • CRI-O
  • Containerd

NOTE: Choose only one runtime.

1) Docker runtime

If your container runtime of choice is Docker CE, follow steps provided below to configure it.

# Add repo and Install packages
sudo apt update
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install -y containerd.io docker-ce docker-ce-cli

# Create required directories
sudo mkdir -p /etc/systemd/system/docker.service.d

# Create daemon json config file
sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

# Start and enable Services
sudo systemctl daemon-reload 
sudo systemctl restart docker
sudo systemctl enable docker

# Configure persistent loading of modules
sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF

# Ensure you load modules
sudo modprobe overlay
sudo modprobe br_netfilter

# Set up required sysctl params
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

For Docker Engine you need a shim interface. You can install Mirantis cri-dockerd as covered in the guide below.

Mirantis cri-dockerd CRI socket file path is /run/cri-dockerd.sock. This is what will be used when configuring Kubernetes cluster.
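Once cri-dockerd is installed, you can confirm the shim socket is active before continuing. This assumes the default cri-docker systemd units shipped with the Mirantis packages:

# Confirm the shim socket is active and the socket file exists
systemctl status cri-docker.socket
ls -l /run/cri-dockerd.sock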

2) Installing CRI-O runtime

For CRI-O, use the commands shared below:

# Configure persistent loading of modules
sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF

# Ensure you load modules
sudo modprobe overlay
sudo modprobe br_netfilter

# Set up required sysctl params
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Reload sysctl
sudo sysctl --system

# Add CRI-O repo
sudo -i
OS="xUbuntu_22.04"
VERSION=1.26
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/Release.key | apt-key add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -

# Install CRI-O
sudo apt update
sudo apt install -y cri-o cri-o-runc

# Start and enable Service
sudo systemctl daemon-reload
sudo systemctl restart crio
sudo systemctl enable crio
systemctl status crio
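If you have crictl available (shipped separately in the cri-tools package), you can optionally confirm that CRI-O responds on its socket:

sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info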

3) Installing Containerd

You can also use containerd instead of Docker. This is the preferred method for Docker Engine users, since Kubernetes no longer ships a built-in Docker shim.

# Configure persistent loading of modules
sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF

# Load at runtime
sudo modprobe overlay
sudo modprobe br_netfilter

# Ensure sysctl params are set
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Reload configs
sudo sysctl --system

# Install required packages
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates

# Add Docker repo
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker-archive-keyring.gpg
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# Install containerd
sudo apt update
sudo apt install -y containerd.io

# Configure containerd and start service
sudo mkdir -p /etc/containerd
sudo containerd config default|sudo tee /etc/containerd/config.toml

# restart containerd
sudo systemctl restart containerd
sudo systemctl enable containerd
systemctl status containerd

To use the systemd cgroup driver with runc, set SystemdCgroup = true under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] in /etc/containerd/config.toml. The runtime's cgroup driver should match the kubelet's; kubeadm defaults the kubelet to the systemd driver.
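In the default generated config, SystemdCgroup is set to false under the runc options. One quick way to flip it and apply the change:

# Switch runc to the systemd cgroup driver, then restart containerd
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd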

5. Initialize control plane (run on first master node)

Login to the server to be used as master and make sure that the br_netfilter module is loaded:

$ lsmod | grep br_netfilter
br_netfilter           22256  0 
bridge                151336  2 br_netfilter,ebtable_broute

Enable kubelet service.

sudo systemctl enable kubelet

We now want to initialize the machine that will run the control plane components, which include etcd (the cluster database) and the API Server.

Pull container images:

$ sudo kubeadm config images pull
[config/images] Pulled registry.k8s.io/kube-apiserver:v1.28.0
[config/images] Pulled registry.k8s.io/kube-controller-manager:v1.28.0
[config/images] Pulled registry.k8s.io/kube-scheduler:v1.28.0
[config/images] Pulled registry.k8s.io/kube-proxy:v1.28.0
[config/images] Pulled registry.k8s.io/pause:3.9
[config/images] Pulled registry.k8s.io/etcd:3.5.9-0
[config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1

If you have multiple CRI sockets, please use --cri-socket to select one:

# CRI-O
sudo kubeadm config images pull --cri-socket /var/run/crio/crio.sock

# Containerd
sudo kubeadm config images pull --cri-socket /run/containerd/containerd.sock

# Docker
sudo kubeadm config images pull --cri-socket /run/cri-dockerd.sock 

These are the basic kubeadm init options used to bootstrap a cluster:

  • --control-plane-endpoint: sets the shared endpoint for all control-plane nodes. Can be a DNS name or an IP address
  • --pod-network-cidr: sets the Pod network add-on CIDR
  • --cri-socket: sets the runtime socket path; use it if you have more than one container runtime installed
  • --apiserver-advertise-address: sets the advertise address for this particular control-plane node's API server

Bootstrap without shared endpoint

To bootstrap a cluster without a shared DNS endpoint, run:

sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16

Bootstrap with shared endpoint (DNS name for control plane API)

Set a DNS name for the cluster endpoint, or add a record to the /etc/hosts file.

$ sudo vim /etc/hosts
172.29.20.5 k8s-cluster.neveropen.co.za

Create cluster:

sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --upload-certs \
  --control-plane-endpoint=k8s-cluster.neveropen.co.za

Note: If 10.244.0.0/16 is already in use within your network you must select a different pod network CIDR, replacing 10.244.0.0/16 in the above command.

Container runtime sockets:

Runtime      Path to Unix domain socket
Docker       /run/cri-dockerd.sock
containerd   /run/containerd/containerd.sock
CRI-O        /var/run/crio/crio.sock

You can optionally pass the socket file for the runtime and an advertise address, depending on your setup.

# CRI-O
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket /var/run/crio/crio.sock \
  --upload-certs \
  --control-plane-endpoint=k8s-cluster.neveropen.co.za

# Containerd
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket /run/containerd/containerd.sock \
  --upload-certs \
  --control-plane-endpoint=k8s-cluster.neveropen.co.za

# Docker
# Must do https://neveropen.co.za/install-mirantis-cri-dockerd-as-docker-engine-shim-for-kubernetes/
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket /run/cri-dockerd.sock  \
  --upload-certs \
  --control-plane-endpoint=k8s-cluster.neveropen.co.za

Here is the output of my initialization command:

....
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-cluster.neveropen.co.za k8smas01.neveropen.co.za kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8smas01.neveropen.co.za localhost] and IPs [192.168.1.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8smas01.neveropen.co.za localhost] and IPs [192.168.1.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.005078 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
62c529b867e160f6d67cfe691838377e553c4ec86a9f0d6b6b6f410d79ce8253
[mark-control-plane] Marking the node k8smas01.home.cloudlabske.io as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8smas01.home.cloudlabske.io as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: wbgkcz.cplewrbyxgadq8nu
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-cluster.neveropen.co.za:6443 --token sr4l2l.2kvot0pfalh5o4ik \
    --discovery-token-ca-cert-hash sha256:1d2c7d5a14c5d717a676291ff5ac25e041386e6371d56bb5cc82d5907fdf62a1 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-cluster.neveropen.co.za:6443 --token sr4l2l.2kvot0pfalh5o4ik \
    --discovery-token-ca-cert-hash sha256:c692fb047e15883b575bd6710779dc2c5af8073f7cab460abd181fd3ddb29a18

Configure kubectl using the commands in the output:

mkdir -p $HOME/.kube
sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
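Alternatively, if you are the root user, you can point kubectl at the admin kubeconfig directly:

export KUBECONFIG=/etc/kubernetes/admin.conf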

Check cluster status:

$ kubectl cluster-info
Kubernetes master is running at https://k8s-cluster.neveropen.co.za:6443
KubeDNS is running at https://k8s-cluster.neveropen.co.za:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Additional master nodes can be added using the command from the installation output:

kubeadm join k8s-cluster.neveropen.co.za:6443 --token sr4l2l.2kvot0pfalh5o4ik \
    --discovery-token-ca-cert-hash sha256:c692fb047e15883b575bd6710779dc2c5af8073f7cab460abd181fd3ddb29a18 \
    --control-plane 

6. Install Kubernetes network plugin

In this guide we’ll use the Flannel network plugin. You can choose any other supported network plugin. Flannel is a simple and easy way to configure a layer 3 network fabric designed for Kubernetes.

Download installation manifest.

wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

If your Kubernetes installation uses a custom podCIDR (not 10.244.0.0/16), modify the Network value in the downloaded manifest to match your CIDR.

$ vim kube-flannel.yml
net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

Then install Flannel by creating the necessary resources.

$ kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Confirm that all of the pods are running; it may take from a few seconds to a few minutes before they are ready.

$ kubectl get pods -n kube-flannel
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-pppw4   1/1     Running   0          2m16s
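You can also wait on the Flannel DaemonSet rollout directly instead of polling the pods:

kubectl -n kube-flannel rollout status daemonset kube-flannel-ds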

Confirm master node is ready:

kubectl get nodes -o wide

7. Add worker nodes

With the control plane ready you can add worker nodes to the cluster for running scheduled workloads.

If the endpoint address is not in DNS, add a record to /etc/hosts.

$ sudo vim /etc/hosts
172.29.20.5 k8s-cluster.neveropen.co.za

The join command printed during cluster initialization is used to add a worker node to the cluster.

kubeadm join k8s-cluster.neveropen.co.za:6443 \
  --token sr4l2l.2kvot0pfalh5o4ik \
  --discovery-token-ca-cert-hash sha256:c692fb047e15883b575bd6710779dc2c5af8073f7cab460abd181fd3ddb29a18

Output:

[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.21" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run the command below on the control plane to confirm the node joined the cluster.

$ kubectl get nodes
NAME                                 STATUS   ROLES    AGE   VERSION
k8s-master01.neveropen.co.za   Ready    master   10m   v1.26.1
k8s-worker01.neveropen.co.za   Ready    <none>   50s   v1.26.1
k8s-worker02.neveropen.co.za   Ready    <none>   12s   v1.26.1

$ kubectl get nodes -o wide

Generating join token

If the join token has expired, refer to our guide on how to join worker nodes by creating a new token.
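In short, a fresh join command can be generated on the control plane at any time:

sudo kubeadm token create --print-join-command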

8. Deploy test application on cluster

If you only have a single-node cluster, check our guide on how to run container pods on master nodes.

Let’s deploy a test application to confirm the cluster is functional.

kubectl apply -f https://k8s.io/examples/pods/commands.yaml

Check to see if the pod started:

$ kubectl get pods
NAME           READY   STATUS      RESTARTS   AGE
command-demo   0/1     Completed   0          16s
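The demo pod runs a single command and exits, hence the Completed status. Its output can be viewed with kubectl logs:

kubectl logs command-demo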

9. Install Kubernetes Dashboard (Optional)

Kubernetes dashboard can be used to deploy containerized applications to a Kubernetes cluster, troubleshoot your containerized application, and manage the cluster resources.

Refer to our guide for installation: How To Install Kubernetes Dashboard with NodePort

10. Install Metrics Server (optional but recommended)

Metrics Server is a cluster-wide aggregator of resource usage data. It collects metrics from the Summary API, exposed by the kubelet on each node. Use our guide below to deploy it.
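Once Metrics Server is deployed, you can query live resource usage with kubectl top, for example:

kubectl top nodes
kubectl top pods -A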

11. Deploy Prometheus / Grafana Monitoring

Prometheus is a full-fledged solution that gives you access to advanced metrics capabilities in a Kubernetes cluster. Grafana is used for analytics and interactive visualization of the metrics collected and stored in the Prometheus database. We have a complete guide on how to set up a complete monitoring stack on a Kubernetes cluster:

12. Persistent Storage Configuration ideas (Optional)

If you’re also looking for a persistent storage solution for your Kubernetes cluster, check out:

13. Install Ingress Controller in Kubernetes

Next, you can deploy an Ingress controller for Kubernetes workloads; use our guide for the installation process:

14. Deploy MetalLB on Kubernetes

Follow the guide below to install and configure MetalLB on Kubernetes:


Nicole Veronica Rubhabha
A highly competent and organized DotNet developer with a track record of architecting and developing web client-server applications. Recognized as a personable, dedicated performer who demonstrates innovation, communication, and teamwork to ensure quality and timely project completion. Expertise in C#, ASP.Net, MVC, LINQ, EF 6, Web Services, SQL Server, MySql, and web development.