
Upgrade Kubernetes Cluster on OpenStack Magnum

How can I upgrade a Kubernetes cluster powered by OpenStack Magnum? For a managed Kubernetes service such as one running on the Magnum orchestration engine, a rolling upgrade is an important feature for users. The only prerequisite for this article is a working Kubernetes cluster deployed on OpenStack using Magnum.

Please note that the Kubernetes version upgrade is only supported by the Fedora Atomic and Fedora CoreOS drivers. Both are designed around a base OS that can tolerate the disruption caused by automated updates. My cluster uses Fedora CoreOS as the base operating system:

$ cat /etc/os-release
NAME=Fedora
VERSION="34.20210427.3.0 (CoreOS)"
ID=fedora
VERSION_ID=34
VERSION_CODENAME=""
PLATFORM_ID="platform:f34"
PRETTY_NAME="Fedora CoreOS 34.20210427.3.0"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:34"
HOME_URL="https://getfedora.org/coreos/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora-coreos/"
SUPPORT_URL="https://github.com/coreos/fedora-coreos-tracker/"
BUG_REPORT_URL="https://github.com/coreos/fedora-coreos-tracker/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
....

The kube_tag label lets users select a specific Kubernetes release by its container image tag for the Fedora CoreOS driver. If this label is unset, the default Kubernetes release of the installed Magnum version is used during cluster provisioning. Have a look at the Magnum and Kubernetes releases compatibility matrix for the supported combinations.
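
To see which kube_tag a running cluster or its template currently carries, you can inspect the labels. A minimal check, using the cluster and template names that appear later in this article:

# Labels applied to an existing cluster (kube_tag shows here if it was set)
openstack coe cluster show k8s-cluster-02 -c labels

# Same check against the template the cluster was created from
openstack coe cluster template show k8s-cluster-template-v1.18.2 -c labels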

Step 1: Identify the current Kubernetes version

Get current Kubernetes version:

$ kubectl version --short
Server Version: v1.18.2
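
You can also confirm the kubelet version reported by each node; before the upgrade every node should show the same release (v1.18.2 in this case):

# The VERSION column lists the kubelet version running on each node
$ kubectl get nodes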

We’ll upgrade our cluster by following the steps below:

  1. Create a new Magnum Kubernetes Cluster Template – identical to the previous template except that the kube_tag label references a newer Kubernetes version.
  2. Initiate a cluster rolling upgrade

Step 2: Create a New Cluster Template with the Upgraded Version

Go through the Magnum and Kubernetes releases compatibility matrix to have a clear understanding of the versions supported by your OpenStack Magnum installation. As my setup is based on Victoria, I should be able to upgrade from version 1.18.2 to 1.18.9.

The GitHub releases page has all Kubernetes release notes if you need more details on a particular version.

My Cluster was deployed from the following Template:

# Cluster Template Creation
openstack coe cluster template create k8s-cluster-template-v1.18.2 \
   --image Fedora-CoreOS-34 \
   --keypair admin \
   --external-network public \
   --fixed-network private \
   --fixed-subnet private_subnet \
   --dns-nameserver 8.8.8.8 \
   --flavor m1.medium \
   --master-flavor m1.medium \
   --volume-driver cinder \
   --docker-volume-size 5 \
   --network-driver calico \
   --docker-storage-driver overlay2 \
   --coe kubernetes \
   --labels kube_tag=v1.18.2

# Initial Cluster Creation
openstack coe cluster create k8s-cluster-02 \
    --cluster-template k8s-cluster-template-v1.18.2 \
    --master-count 1 \
    --node-count 1
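
Cluster creation can take several minutes. One way to follow progress until the status reaches CREATE_COMPLETE (a simple polling sketch, reusing the cluster name from the command above):

# Poll the cluster status and health every 30 seconds
watch -n 30 openstack coe cluster show k8s-cluster-02 -c status -c health_status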

If some pods fail to start after cluster creation with the error “forbidden: PodSecurityPolicy: unable to admit pod: []”, consider adding the label below:

--labels admission_control_list="NodeRestriction,NamespaceLifecycle,Limi

Confirm that the newly created cluster is complete and in a healthy state before proceeding with the upgrade:

$ openstack coe cluster list  -f json
[
  {
    "uuid": "48eb36b9-7f8b-4442-8637-bebcf078ca8b",
    "name": "k8s-cluster-01",
    "keypair": "admin",
    "node_count": 2,
    "master_count": 1,
    "status": "CREATE_COMPLETE",
    "health_status": "HEALTHY"
  },
  {
    "uuid": "e5ebf8aa-38f0-4082-a665-5bdb4f4769f9",
    "name": "k8s-cluster-02",
    "keypair": "admin",
    "node_count": 1,
    "master_count": 1,
    "status": "CREATE_COMPLETE",
    "health_status": "HEALTHY"
  }
]

Create a new cluster template with the updated Kubernetes version. Mine is as below:

openstack coe cluster template create k8s-cluster-template-v1.18.9 \
   --image Fedora-CoreOS-34 \
   --keypair admin \
   --external-network public \
   --fixed-network private \
   --fixed-subnet private_subnet \
   --dns-nameserver 8.8.8.8 \
   --flavor m1.medium \
   --master-flavor m1.medium \
   --volume-driver cinder \
   --docker-volume-size 5 \
   --network-driver calico \
   --docker-storage-driver overlay2 \
   --coe kubernetes \
   --labels kube_tag=v1.18.9

Confirm creation was successful:

$ openstack coe cluster template list -f json
[
  {
    "uuid": "b05dcb03-07a7-4b66-beee-42383ff16e9b",
    "name": "k8s-cluster-template"
  },
  {
    "uuid": "77cc9112-b7ba-4531-9be5-6923528cd0eb",
    "name": "k8s-cluster-template-v1.18.2"
  },
  {
    "uuid": "cc33f457-866a-440f-ac78-6c3be713ef73",
    "name": "k8s-cluster-template-v1.18.9"
  }
]
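
Optionally, confirm that the new template carries the expected kube_tag before using it for the upgrade:

# The labels column should include kube_tag=v1.18.9
openstack coe cluster template show k8s-cluster-template-v1.18.9 -c labels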

Key Notes:

  • The highest version you can upgrade to in OpenStack Victoria and earlier releases is 1.18.9. This is because official Hyperkube images were discontinued for kube_tag greater than 1.18.x, and those releases have no label that lets users specify a custom prefix for the Hyperkube container source.
  • If you’re running OpenStack Wallaby, you can add the hyperkube_prefix label to specify a custom prefix for the Hyperkube container source:

# Alternative Hyperkube image sources
#docker.io/rancher/
#docker.io/kubesphere/hyperkube

#Example:
--labels kube_tag=v1.21.1,hyperkube_prefix=docker.io/rancher/

#Checking available tags
sudo podman image search docker.io/rancher/hyperkube --list-tags --limit 1000

You can also pull, tag, and upload the image to your own registry or docker.io:

#Examples
## Search available tags for particular release
podman image search docker.io/rancher/hyperkube --list-tags --limit 1000 | grep 1.21

# Pull
podman pull docker.io/rancher/hyperkube:v1.21.1-rancher1

# Login to docker.io
$ podman login docker.io
Username: jmutai
Password:
Login Succeeded!

# Tag image
$ podman tag docker.io/rancher/hyperkube:v1.21.1-rancher1 docker.io/jmutai/hyperkube:v1.21.1

# Push image to registry
$ podman push docker.io/jmutai/hyperkube:v1.21.1

# I can then use the labels below in the template
--labels kube_tag=v1.21.1,hyperkube_prefix=docker.io/jmutai/

Step 3: Upgrade Your Kubernetes Cluster Using the New Template

Run the following command to trigger a rolling upgrade of the Kubernetes version:

$ openstack coe cluster upgrade <cluster ID> <new cluster template ID>

Example:

$ openstack coe cluster upgrade k8s-cluster-02 k8s-cluster-template-v1.18.9
Request to upgrade cluster k8s-cluster-02 has been accepted.

The status should show an update in progress:

$ openstack coe cluster list --column name  --column status --column health_status
+----------------+--------------------+---------------+
| name           | status             | health_status |
+----------------+--------------------+---------------+
| k8s-cluster-02 | UPDATE_IN_PROGRESS | UNHEALTHY     |
+----------------+--------------------+---------------+
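
The rolling upgrade replaces nodes one at a time and can take a while. A simple way to wait for the status to change to UPDATE_COMPLETE (polling sketch):

# Re-run the listing every 30 seconds until the status settles
watch -n 30 openstack coe cluster list --column name --column status --column health_status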

Check the Cluster status once the upgrade is complete:

$ openstack coe cluster list --column name  --column status --column health_status
+----------------+-----------------+---------------+
| name           | status          | health_status |
+----------------+-----------------+---------------+
| k8s-cluster-01 | CREATE_COMPLETE | HEALTHY       |
| k8s-cluster-02 | UPDATE_COMPLETE | HEALTHY       |
+----------------+-----------------+---------------+

Let’s download the kubeconfig file and confirm the new version:

$ mkdir k8s-cluster-02 
$ openstack coe cluster config --dir ./k8s-cluster-02 k8s-cluster-02 --force

Check Kubernetes version:

$ export KUBECONFIG=./k8s-cluster-02/config
$ kubectl version --short
Client Version: v1.21.1
Server Version: v1.18.9

Example output after a later upgrade to version 1.21.1, using the hyperkube_prefix label described in the Key Notes above:

$ kubectl version --short
Client Version: v1.21.1
Server Version: v1.21.1

$ kubectl get nodes
NAME                                   STATUS   ROLES    AGE   VERSION
k8s-cluster-02-soooe6tdv773-master-0   Ready    master   10h   v1.21.1
k8s-cluster-02-soooe6tdv773-node-0     Ready    <none>   10h   v1.21.1
k8s-cluster-02-soooe6tdv773-node-1     Ready    <none>   10h   v1.21.1
k8s-cluster-02-soooe6tdv773-node-2     Ready    <none>   10h   v1.21.1
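
After the upgrade it is worth confirming that the system workloads came back up cleanly; all pods in the kube-system namespace should be Running or Completed (output will vary per cluster):

# Sanity check on control plane and networking add-ons after the upgrade
$ kubectl get pods -n kube-system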

This confirms the upgrade of Kubernetes Cluster on OpenStack Magnum was successful.
