
Install and Use KubeSphere on existing Kubernetes cluster


Containerization has seen widespread adoption over the past decade, as organizations around the world modernize their existing applications for the cloud. Containerization is the packaging of software code together with all of its required dependencies into a lightweight executable unit. Containers can be run and managed using tools such as Docker, Podman, and Kubernetes.

Kubernetes (k8s) is a free and open-source container orchestration tool that distributes workloads across a cluster of hosts. It also automates container networking, storage, and persistent volumes while maintaining the desired container state.

When a Kubernetes environment grows, it can become hard to manage, which creates a strong need for Kubernetes management tools. The tools available in the market include K9s, Weave Scope, Dashboard + Kubectl + Kubeadm, KubeSpray, Kontena Lens, WKSctl, Rancher, Portainer, Headlamp, Konstellate, and more.


KubeSphere is a distributed operating system committed to providing a plug-and-play architecture for cloud-native application management, with Kubernetes as its kernel. This multi-tenant, enterprise-grade platform offers full-stack automated IT operations and streamlined DevOps workflows. It provides a user-friendly web UI that helps users build robust and feature-rich platforms, and it makes common functions such as resource management, DevOps (CI/CD), monitoring, logging, service mesh, multi-tenancy, auditing, alerting and notification, application lifecycle management, multi-cluster deployment, access control, storage and networking, autoscaling, registry management, and security management more effortless.

One of the fantastic features is that it allows third-party applications to integrate seamlessly into its ecosystem. Other features include:

  • Open Source: with its open-source model, the platform can easily be extended and improved by the community. Its source code is available on GitHub.
  • O&M Friendly: it hides the details of the underlying infrastructure and helps you modernize, migrate, deploy, and manage existing workloads.
  • Run KubeSphere Everywhere: this lightweight platform is friendly to different cloud ecosystems. It can be installed on cloud platforms such as Alibaba Cloud, QingCloud, Tencent Cloud, AWS, Huawei Cloud, and more.
  • Landscape: KubeSphere is a Kubernetes Conformance Certified platform and enriches the CNCF Cloud Native Landscape.

Today, we will learn how to install and use KubeSphere on an existing Kubernetes cluster.

Prerequisites

For this guide to work best, you need an existing Kubernetes cluster up and running (for example, one deployed with RKE2 or K0s) and access to the cluster's admin kubeconfig.

Also, ensure that kubectl is installed:

curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin
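You can confirm that kubectl was installed correctly:

kubectl version --client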

To be able to use kubectl against the cluster, export the admin kubeconfig:

##For RKE2
export PATH=$PATH:/var/lib/rancher/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml

##For K0s
export KUBECONFIG=/var/lib/k0s/pki/admin.conf
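These exports only last for the current shell session. To make them persistent, you can append them to your shell profile, for example (shown here for K0s; adjust the paths for your cluster type):

echo 'export KUBECONFIG=/var/lib/k0s/pki/admin.conf' >> ~/.bashrc
source ~/.bashrc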

Check the available nodes in the cluster:

$ kubectl get nodes
NAME      STATUS   ROLES           AGE     VERSION
master    Ready    control-plane   3m38s   v1.25.2+k0s
worker1   Ready    <none>          100s    v1.25.2+k0s
worker2   Ready    <none>          92s     v1.25.2+k0s

Step 1 – Create Persistent Volumes For KubeSphere

One of the requirements of KubeSphere is to have persistent storage set up. We need to have a storage class in the cluster and set it as the default storage class.

Refer to our other articles on how to configure persistent storage for your Kubernetes cluster.
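If a storage class already exists but is not marked as the default, you can annotate it with a command similar to the one below (replace local-path with the name of your class):

kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'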

Check if you have a storage class configured and working. In this example, the default class is backed by the Rancher local-path provisioner:

$ kubectl get sc
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  13m

Alternatively, you can use a manually provisioned storage class, such as the kubesphere-sc class used in the rest of this guide:

$ kubectl get sc
NAME                      PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
kubesphere-sc (default)   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  2m31s
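The kubesphere-sc class above uses the kubernetes.io/no-provisioner plugin, so its PersistentVolumes must be created manually. A minimal manifest, saved for example as kubesphere-pv.yml, could look like the sketch below; the capacity, host path, and node name are illustrative assumptions that you should adjust to your environment:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kubesphere-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # mark as the default class
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kubesphere-pv
spec:
  capacity:
    storage: 20Gi                    # sized to hold Prometheus data
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: kubesphere-sc
  local:
    path: /mnt/kubesphere            # directory must already exist on the node
  nodeAffinity:                      # local volumes must be pinned to a node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker1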

Once the manifest has been adjusted to match your environment, apply it:

kubectl create -f kubesphere-pv.yml

Step 2 – Deploy KubeSphere on Kubernetes

Once the cluster meets all the requirements, you can install KubeSphere. In this guide, we will use kubectl to install KubeSphere with the default minimal package.

Download the installation manifest files:

VER=$( curl --silent "https://api.github.com/repos/kubesphere/ks-installer/releases/latest"| grep '"tag_name"'|sed -E 's/.*"([^"]+)".*/\1/')
wget https://github.com/kubesphere/ks-installer/releases/download/$VER/kubesphere-installer.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/$VER/cluster-configuration.yaml
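Optionally, you can edit cluster-configuration.yaml before applying it to enable pluggable components. For example, the DevOps component is toggled with a flag similar to the snippet below (the exact layout of the file may vary slightly between releases):

spec:
  devops:
    enabled: true    # set to true to install the KubeSphere DevOps system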

Apply the manifests to create the Kubernetes objects:

$ kubectl apply -f kubesphere-installer.yaml
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io created
namespace/kubesphere-system created
serviceaccount/ks-installer created
clusterrole.rbac.authorization.k8s.io/ks-installer created
clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
deployment.apps/ks-installer created

$ kubectl apply -f cluster-configuration.yaml
clusterconfiguration.installer.kubesphere.io/ks-installer created

After running the two commands, the kubesphere-system namespace is created and several other resources are started. You can follow the deployment progress with the command:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

Sample Output:

....
TASK [ks-core/prepare : KubeSphere | Getting installation init files] **********
changed: [localhost] => (item=ks-init)

TASK [ks-core/prepare : KubeSphere | Initing KubeSphere] ***********************
changed: [localhost] => (item=role-templates.yaml)

TASK [ks-core/prepare : KubeSphere | Generating kubeconfig-admin] **************
skipping: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=29   changed=21   unreachable=0    failed=0    skipped=16   rescued=0    ignored=0   
Start installing monitoring
Start installing multicluster
Start installing openpitrix
Start installing network
**************************************************
Waiting for all tasks to be completed ...
task network status is successful  (1/4)
task openpitrix status is successful  (2/4)
task multicluster status is successful  (3/4)
....

Check if all pods are running:

kubectl get pod --all-namespaces

Sample Output:

Install and Use KubeSphere
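If you prefer to block until the KubeSphere system pods are ready instead of polling, kubectl wait can help; the namespace and timeout below are only examples:

kubectl -n kubesphere-system wait --for=condition=Ready pod --all --timeout=600s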

You can verify the status of the created PVs:

$ kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                             STORAGECLASS    REASON   AGE
kubesphere-pv    20Gi       RWO            Retain           Bound    kubesphere-monitoring-system/prometheus-k8s-db-prometheus-k8s-0   kubesphere-sc            27m
kubesphere-pv2   20Gi       RWO            Retain           Bound    kubesphere-monitoring-system/prometheus-k8s-db-prometheus-k8s-1   kubesphere-sc            15m

From the output, the two PVs have been bound, which means they are in use. You can also view the PVCs that were created:

$ kubectl get pvc -A
NAMESPACE                      NAME                                 STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS    AGE
kubesphere-monitoring-system   prometheus-k8s-db-prometheus-k8s-0   Bound    kubesphere-pv    20Gi       RWO            kubesphere-sc   26m
kubesphere-monitoring-system   prometheus-k8s-db-prometheus-k8s-1   Bound    kubesphere-pv2   20Gi       RWO            kubesphere-sc   25m

Now we can proceed and get the port for the ks-console service. The default port is 30880:

$ kubectl get svc/ks-console -n kubesphere-system
NAME         TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
ks-console   NodePort   10.110.224.204   <none>        80:30880/TCP   30m

We have confirmed that we have a NodePort service running on port 30880. You might need to allow this port through the firewall.
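On nodes running firewalld, for example, the port can be opened as shown below; adapt the commands to whichever firewall you use:

sudo firewall-cmd --add-port=30880/tcp --permanent
sudo firewall-cmd --reload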

Step 3 – Access KubeSphere Web UI

Using this port, you can then access the KubeSphere Web UI at the URL http://node_IP:30880.
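If the page does not load, you can first confirm from the command line that the console is reachable (replace node_IP with the address of any cluster node):

curl -I http://node_IP:30880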

Install and Use KubeSphere 1

Log in using the default account and password (admin/P@88w0rd). You will be required to set a new password. Thereafter, you will be taken to the dashboard shown below.

Install and Use KubeSphere 2

Now there are several ways to proceed. We will first look at the Platform information. We have one cluster configured as shown:

Install and Use KubeSphere 3

You can view the available nodes and their health.

Install and Use KubeSphere 4

View metrics for the various resources under monitoring and alerting.

Install and Use KubeSphere 5

You can also view the workspaces and the projects in the workspaces:

Install and Use KubeSphere 6

Navigate into any of the workspaces to view the workloads, storage, configurations, and more.

Install and Use KubeSphere 7

Step 4 – Deploy an Ingress Controller Using KubeSphere

To deploy an Ingress Controller, first create a gateway under cluster settings as shown:

Install and Use KubeSphere 8

Once created, we need to get a Load Balancer IP. This can be achieved by first installing MetalLB on the Kubernetes cluster:
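Below is a minimal sketch of installing MetalLB in layer 2 mode and giving it an address pool; the MetalLB version and the 192.168.205.30-192.168.205.50 range are assumptions that you should replace with a free range on your own network:

# Install MetalLB using its native manifests
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml

# Define an address pool and advertise it in layer 2 mode
cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.205.30-192.168.205.50
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
EOF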

Once applied, you should have a LoadBalancer IP address assigned to the gateway service:

$ kubectl get svc -A
NAMESPACE                      NAME                                          TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                        AGE
default                        kubernetes                                    ClusterIP      10.96.0.1        <none>           443/TCP                        105m
kube-system                    kube-dns                                      ClusterIP      10.96.0.10       <none>           53/UDP,53/TCP,9153/TCP         104m
kube-system                    kubelet                                       ClusterIP      None             <none>           10250/TCP,10255/TCP,4194/TCP   85m
kube-system                    metrics-server                                ClusterIP      10.99.27.215     <none>           443/TCP                        104m
kubesphere-controls-system     default-http-backend                          ClusterIP      10.106.36.230    <none>           80/TCP                         88m
kubesphere-controls-system     kubesphere-router-kubesphere-system           LoadBalancer   10.109.128.248   192.168.205.40   80:30422/TCP,443:30652/TCP     8m13s
kubesphere-controls-system     kubesphere-router-kubesphere-system-metrics   ClusterIP      10.102.198.248   <none>           10254/TCP                      8m13s
.....

Now proceed and create an ingress under application workloads.

Install and Use KubeSphere 9

Add a sample routing rule for the KubeSphere console service:

Install and Use KubeSphere 10

Finish creating the ingress.

Install and Use KubeSphere 11

After mapping the Load Balancer IP address to the domain name in /etc/hosts, try accessing the service to verify if the Ingress Controller is working:

Install and Use KubeSphere 12
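For example, assuming a hypothetical host name console.example.com was used in the routing rule, the mapping and check could look like this:

# Map the Load Balancer IP to the host name used in the ingress rule
echo "192.168.205.40 console.example.com" | sudo tee -a /etc/hosts

# The request should be routed through the gateway to the KubeSphere console
curl -I http://console.example.com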


Verdict

Today, we have learned how to install and use KubeSphere on an existing Kubernetes cluster. We have only covered a few of its use scenarios, but we can already agree that KubeSphere hides much of the complexity of Kubernetes and makes it more accessible to developers and other users. It allows users to manage multiple clusters and deploy applications with just a few clicks, and it provides tools for monitoring and scaling applications to help keep them running smoothly and efficiently.
