Kubespray uses both Ansible and kubeadm to deploy Kubernetes clusters on virtual machines or dedicated servers. Being composable, it lets you choose from a wide range of options during deployment, such as the Linux distribution, network plugin, and container runtime. With Kubespray you can install on cloud platforms such as Amazon EC2 (AWS), Azure, and Google Cloud, as well as on private cloud platforms like OpenStack.
This brief article will help you upgrade a Kubernetes cluster deployed using Kubespray. For a new installation, check out our article linked below.
And if you only need to add a new node to the cluster, we have: Adding a New Node into Kubernetes Cluster using Kubespray
Here is our Kubernetes version before the upgrade:
$ kubectl get node
NAME STATUS ROLES AGE VERSION
master01 Ready control-plane 230d v1.24.6
master02 Ready control-plane 230d v1.24.6
master03 Ready control-plane 223d v1.24.6
node01 Ready <none> 230d v1.24.6
node02 Ready <none> 230d v1.24.6
node03 Ready <none> 230d v1.24.6
node04 Ready <none> 223d v1.24.6
node05 Ready <none> 223d v1.24.6
node06 Ready <none> 8d v1.24.6
Back up current configurations
Our inventory directory for Kubespray is kubespray/inventory/k8scluster/. We’ll copy the contents of three main files for later reference when updating parameters.
mkdir ~/kubespray-backups
cp kubespray/inventory/k8scluster/group_vars/k8s_cluster/k8s-cluster.yml ~/kubespray-backups
cp kubespray/inventory/k8scluster/group_vars/all/all.yml ~/kubespray-backups
cp kubespray/inventory/k8scluster/inventory.ini ~/kubespray-backups
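Optionally, you can also archive the whole inventory directory in case more than these three files were customized. A minimal sketch; the archive name and location are just examples:
# optional: snapshot the entire inventory directory before touching anything
tar -czf ~/kubespray-backups/k8scluster-inventory-$(date +%F).tar.gz \
  -C kubespray/inventory k8scluster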
The values below were used in our previous deployment. Check whether you have customizations and save them for later use.
$ vim kubespray/inventory/k8scluster/group_vars/k8s_cluster/k8s-cluster.yml
cluster_name: k8s.example.com
kube_network_plugin: flannel
container_manager: crio
$ vim kubespray/inventory/k8scluster/group_vars/all/all.yml
bin_dir: /opt/bin # because the OS is Flatcar Container Linux
apiserver_loadbalancer_domain_name: api.k8s.example.com
loadbalancer_apiserver:
  address: 192.168.1.8
  port: 6443
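If you are not sure which values were customized, one quick way to spot them is to diff your inventory against the sample shipped with the tag you originally deployed from. A minimal sketch, run from inside the kubespray checkout while it is still at the original tag:
# show only the variables that differ from the upstream defaults
diff -ru inventory/sample/group_vars inventory/k8scluster/group_vars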
Clone the Kubespray source if it doesn’t exist
If you don’t have the latest Kubespray source locally, clone it:
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
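If you still have the clone used for the original deployment, refresh it instead so the newer release tags are available locally:
cd kubespray
git fetch origin --tags   # fetch new release tags without modifying the working tree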
From the node status above, we can confirm the cluster is running Kubernetes v1.24.6, which was deployed from Kubespray tag v2.20.0.
You can check the official Kubespray releases or tags page. To list the available tags in the git repository, run:
$ git tag --list --sort=version:refname
...
v2.19.0
v2.19.1
v2.20.0
v2.21.0
v2.22.0
v2.22.1
Or sort with the latest tag first:
$ git tag -l --sort=-version:refname
v2.22.1
v2.22.0
v2.21.0
v2.20.0
v2.19.1
v2.19.0
v2.18.2
v2.18.1
v2.18.0
....
The same ordering can also be obtained by piping through sort:
$ git tag -l | sort -V --reverse
v2.22.1
v2.22.0
v2.21.0
v2.20.0
v2.19.1
v2.19.0
v2.18.2
v2.18.1
v2.18.0
...
From the output we can confirm the next available release tag is v2.21.0. Our initial upgrade will be performed from its source.
To list remote release branches instead, use:
$ git branch --list --remotes --sort=-version:refname
origin/release-2.22
origin/release-2.21
origin/release-2.20
origin/release-2.19
origin/release-2.18
origin/release-2.17
origin/release-2.16
origin/release-2.15
origin/release-2.14
origin/release-2.13
origin/release-2.12
origin/release-2.11
origin/release-2.10
origin/release-2.9
origin/release-2.8
origin/release-2.7
origin/pre-commit-hook
origin/master
origin/floryut-patch-1
origin/HEAD -> origin/master
Upgrade Kubernetes cluster using Kubespray
Let’s update the files in the working tree to match the release we are upgrading to. For me this is tag v2.21.0.
$ git checkout v2.21.0
D inventory/sample/group_vars/all/aws.yml
D inventory/sample/group_vars/all/azure.yml
D inventory/sample/group_vars/all/containerd.yml
D inventory/sample/group_vars/all/coreos.yml
D inventory/sample/group_vars/all/cri-o.yml
D inventory/sample/group_vars/all/docker.yml
D inventory/sample/group_vars/all/gcp.yml
D inventory/sample/group_vars/all/hcloud.yml
D inventory/sample/group_vars/all/oci.yml
D inventory/sample/group_vars/all/vsphere.yml
D inventory/sample/group_vars/etcd.yml
D inventory/sample/group_vars/k8s_cluster/k8s-net-calico.yml
D inventory/sample/group_vars/k8s_cluster/k8s-net-flannel.yml
D inventory/sample/group_vars/k8s_cluster/k8s-net-kube-ovn.yml
D inventory/sample/group_vars/k8s_cluster/k8s-net-kube-router.yml
D inventory/sample/group_vars/k8s_cluster/k8s-net-macvlan.yml
D inventory/sample/group_vars/k8s_cluster/k8s-net-weave.yml
D inventory/sample/inventory.ini
HEAD is now at 2cf23e310 Don't search filesystem mounts in docker build step (#10131) (#10194)
$ git describe --tags
v2.21.0
We can also use the release branch name with the git checkout command:
git checkout release-2.21
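Whichever way you switch releases, note that each Kubespray release pins its own Ansible and Python dependency versions, so it is usually worth reinstalling the requirements after the checkout. A minimal sketch, assuming you keep them in a dedicated virtual environment (the path is only an example):
python3 -m venv ~/.venvs/kubespray        # example location for the virtual environment
source ~/.venvs/kubespray/bin/activate
pip install -U -r requirements.txt        # requirements shipped with the checked-out release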
Let’s copy the sample inventory into our inventory directory:
cp -rfp inventory/sample inventory/k8scluster
Review the inventory files and variables before performing the upgrade. Set them to match your current installation to avoid any issues after upgrading the cluster (a quick check is shown after this list):
inventory/k8scluster/inventory.ini
inventory/k8scluster/group_vars/all/all.yml
inventory/k8scluster/group_vars/k8s_cluster/k8s-cluster.yml
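After copying the sample over, a quick sanity check is to confirm the values saved earlier made it back into the new files. A small sketch using the example values from the backup section above:
# confirm the customized values are present in the refreshed inventory
grep -E '^(cluster_name|kube_network_plugin|container_manager):' \
  inventory/k8scluster/group_vars/k8s_cluster/k8s-cluster.yml
grep -E '^(bin_dir|apiserver_loadbalancer_domain_name|loadbalancer_apiserver):' \
  inventory/k8scluster/group_vars/all/all.yml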
Contents of my inventory.ini file:
[all]
master01 ansible_host=192.168.1.10 etcd_member_name=etcd1 ansible_user=core
master02 ansible_host=192.168.1.11 etcd_member_name=etcd2 ansible_user=core
master03 ansible_host=192.168.1.12 etcd_member_name=etcd3 ansible_user=core
node01 ansible_host=192.168.1.13 etcd_member_name= ansible_user=core
node02 ansible_host=192.168.1.14 etcd_member_name= ansible_user=core
node03 ansible_host=192.168.1.15 etcd_member_name= ansible_user=core
node04 ansible_host=192.168.1.16 etcd_member_name= ansible_user=core
node05 ansible_host=192.168.1.17 etcd_member_name= ansible_user=core
# ## configure a bastion host if your nodes are not directly reachable
# [bastion]
# bastion ansible_host=x.x.x.x ansible_user=some_user
[kube_control_plane]
master01
master02
master03
[etcd]
master01
master02
master03
[kube_node]
node01
node02
node03
node04
node05
[new_nodes]
node04
node05
[calico_rr]
[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr
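Before starting the upgrade, it can be worth confirming that Ansible can reach every host in the inventory. A minimal sketch; on minimal operating systems such as Flatcar this assumes the Python interpreter installed during the original Kubespray deployment is still discoverable:
# quick connectivity check against all hosts in the inventory
ansible -i inventory/k8scluster/inventory.ini all -m ping -b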
Once done, initiate the upgrade of your cluster by running the following command:
ansible-playbook -i inventory/k8scluster/inventory.ini -b upgrade-cluster.yml
Extra options to use if not set permanently in the inventory file (see the combined example after this list):
-e ansible_user=rocky
--become-user root
To limit the upgrade to one node, use --limit=nodename
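Put together, an invocation using these extra options could look like the sketch below; the user name and node name are placeholders, so adjust them to your environment:
ansible-playbook -i inventory/k8scluster/inventory.ini -b \
  -e ansible_user=core --become-user=root \
  --limit=node01 upgrade-cluster.yml   # check the Kubespray upgrade docs for --limit caveats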
If everything goes well, you’ll see output similar to the below after a successful upgrade. List your nodes to confirm the container runtime and Kubernetes version numbers:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master01 Ready control-plane 232d v1.25.0 192.168.1.10 <none> Flatcar Container Linux by Kinvolk 3510.2.5 (Oklo) 5.15.119-flatcar containerd://1.6.15
master02 Ready control-plane 232d v1.25.0 192.168.1.11 <none> Flatcar Container Linux by Kinvolk 3510.2.5 (Oklo) 5.15.119-flatcar containerd://1.6.15
master03 Ready control-plane 225d v1.25.0 192.168.1.12 <none> Flatcar Container Linux by Kinvolk 3510.2.5 (Oklo) 5.15.119-flatcar containerd://1.6.15
node01 Ready <none> 232d v1.25.0 192.168.1.13 <none> Flatcar Container Linux by Kinvolk 3510.2.5 (Oklo) 5.15.119-flatcar containerd://1.6.15
node02 Ready <none> 232d v1.25.0 192.168.1.14 <none> Flatcar Container Linux by Kinvolk 3510.2.5 (Oklo) 5.15.119-flatcar containerd://1.6.15
node03 Ready <none> 232d v1.25.0 192.168.1.15 <none> Flatcar Container Linux by Kinvolk 3510.2.5 (Oklo) 5.15.119-flatcar containerd://1.6.15
node04 Ready <none> 225d v1.25.0 192.168.1.16 <none> Flatcar Container Linux by Kinvolk 3510.2.5 (Oklo) 5.15.119-flatcar containerd://1.6.15
node05 Ready <none> 225d v1.25.0 192.168.1.17 <none> Flatcar Container Linux by Kinvolk 3510.2.5 (Oklo) 5.15.119-flatcar containerd://1.6.15
node06 Ready <none> 9d v1.25.0 192.168.1.18 <none> Flatcar Container Linux by Kinvolk 3510.2.5 (Oklo) 5.15.119-flatcar containerd://1.6.15
Running kubectl version will display the server version after the update:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.1", GitCommit:"fcf512e2763f3b98bcc8e3fb087cd8cb80f8ca83", GitTreeState:"clean", BuildDate:"2022-08-15T05:48:10Z", GoVersion:"go1.18.4", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.0", GitCommit:"a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2", GitTreeState:"clean", BuildDate:"2022-08-23T17:38:15Z", GoVersion:"go1.19", Compiler:"gc", Platform:"linux/amd64"}
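Beyond node and server versions, it is also worth confirming that system pods settled after the rolling upgrade and that no node was left cordoned by the drain step. A minimal sketch:
# core components should all be Running after the upgrade
kubectl get pods -n kube-system -o wide
# any node still cordoned by the upgrade drain would show SchedulingDisabled
kubectl get nodes | grep -i schedulingdisabled || echo "no cordoned nodes"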
Performing multiple upgrades (after first success)
Check out the next release tag:
$ git checkout v2.22.0
Previous HEAD position was c4346e590 kubeadm/etcd: use config to download certificate (#9609)
HEAD is now at 4014a1ccc fix multus include (#10105)
$ git branch
* (HEAD detached at v2.22.0)
master
Update the inventory directory that contains all settings for the deployment:
mv inventory/k8scluster{,.bak}
cp -rfp inventory/sample inventory/k8scluster
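The previous inventory is still available at inventory/k8scluster.bak, so you can diff it against the fresh copy and re-apply your customizations. A minimal sketch:
# compare the old and new inventories, then carry over your custom values
diff -ru inventory/k8scluster.bak inventory/k8scluster | less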
Then, with your custom values re-applied, perform the cluster upgrade:
ansible-playbook -i inventory/k8scluster/inventory.ini -b upgrade-cluster.yml
Check the new Kubernetes version after the upgrade:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master01 Ready control-plane 233d v1.26.5 192.168.1.10 <none> Flatcar Container Linux by Kinvolk 3510.2.5 (Oklo) 5.15.119-flatcar containerd://1.7.1
master02 Ready control-plane 233d v1.26.5 192.168.1.11 <none> Flatcar Container Linux by Kinvolk 3510.2.5 (Oklo) 5.15.119-flatcar containerd://1.7.1
master03 Ready control-plane 226d v1.26.5 192.168.1.12 <none> Flatcar Container Linux by Kinvolk 3510.2.5 (Oklo) 5.15.119-flatcar containerd://1.7.1
node01 Ready <none> 233d v1.26.5 192.168.1.13 <none> Flatcar Container Linux by Kinvolk 3510.2.5 (Oklo) 5.15.119-flatcar containerd://1.7.1
node02 Ready <none> 233d v1.26.5 192.168.1.14 <none> Flatcar Container Linux by Kinvolk 3510.2.5 (Oklo) 5.15.119-flatcar containerd://1.7.1
node03 Ready <none> 233d v1.26.5 192.168.1.15 <none> Flatcar Container Linux by Kinvolk 3510.2.5 (Oklo) 5.15.119-flatcar containerd://1.7.1
node04 Ready <none> 226d v1.26.5 192.168.1.16 <none> Flatcar Container Linux by Kinvolk 3510.2.5 (Oklo) 5.15.119-flatcar containerd://1.7.1
node05 Ready <none> 226d v1.26.5 192.168.1.17 <none> Flatcar Container Linux by Kinvolk 3510.2.5 (Oklo) 5.15.119-flatcar containerd://1.7.1
node06 Ready <none> 10d v1.26.5 192.168.1.18 <none> Flatcar Container Linux by Kinvolk 3510.2.5 (Oklo) 5.15.119-flatcar containerd://1.7.1
Conclusion
In a matter of minutes, and with a few commands, we have been able to upgrade our Kubernetes cluster using Kubespray. Note that to use this guide you need a fully functional Kubernetes cluster that was deployed using Kubespray. Kubespray is a powerful automation tool that is highly adaptable, configurable, and extensible. It incorporates operations and security best practices and lets you spend less time on cluster administration and more on building your applications.