Using Kubernetes, we can handle a cluster of servers as one big logical server that runs our containers. We declare a desired state for the Kubernetes cluster, and it ensures that the actual state is the same as the desired state at all times, provided that enough hardware resources are available in the cluster.
This article is an excerpt from the book, Microservices with Spring Boot and Spring Cloud, Second Edition by Magnus Larsson – A step-by-step guide to creating and deploying production-quality microservices-based applications.
We will use Minikube to create a local single-node cluster.
On macOS, we will use HyperKit (https://minikube.sigs.k8s.io/docs/drivers/hyperkit/) to run a lightweight Linux VM. HyperKit uses the macOS built-in Hypervisor framework and is installed by Docker Desktop for Mac, so we don’t need to install it separately.
On Windows, we will run Minikube in a Linux server running on WSL2 (Windows Subsystem for Linux, v2). The easiest way to run Minikube in WSL2 is to run Minikube as a Docker container.
Docker and its containers are already running in a separate WSL2 instance; see the Installing Docker Desktop for Windows section in Chapter 22, Installation Instructions for Microsoft Windows with WSL 2 and Ubuntu.
One drawback of running Minikube as a Docker container is that ports exposed by Minikube are only accessible on the host that runs Docker. To make the ports available to Docker clients, for example the Linux server we will use on WSL2, we can specify port mappings when creating the Minikube cluster.
Before creating the Kubernetes cluster, we need to learn a bit about Minikube profiles, the Kubernetes CLI tool known as kubectl, and its use of contexts.
Working with Minikube profiles
In order to run multiple Kubernetes clusters locally, Minikube comes with the concept of profiles. For example, if you want to work with multiple versions of Kubernetes, you can create multiple Kubernetes clusters using Minikube. Each cluster will be assigned a separate Minikube profile. Most of the Minikube commands accept a --profile flag (or -p for short) that can be used to specify which of the Kubernetes clusters the command will be applied to. If you plan to work with one specific profile for a while, a more convenient alternative exists, where you specify the current profile with the following command:
minikube profile my-profile
This command will set the my-profile profile as the current profile.
To get the current profile, run the following command:
minikube config get profile
If no profile is specified, either using the minikube profile command or the --profile switch, a default profile named minikube will be used.
Information regarding existing profiles can be found with the command minikube profile list.
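If you only want to point a single command at a specific cluster, without changing the current profile, you can pass the --profile flag directly. For example, the following command (using my-profile as a placeholder name) checks the status of that cluster:

minikube status --profile my-profile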
Working with the Kubernetes CLI, kubectl
kubectl is the Kubernetes CLI tool. Once a cluster has been set up, this is usually the only tool you need to manage the cluster!
For managing the API objects, as we described earlier in this chapter, the kubectl apply command is the only command you need to know about. It is a declarative command; that is, as an operator, we ask Kubernetes to apply the object definition we give to the command. It is then up to Kubernetes to figure out what actually needs to be done.
Another example of a declarative command that’s hopefully familiar to many readers of this book is a SQL SELECT statement, which can join information from several database tables. We only declare the expected result in the SQL query, and it is up to the database query optimizer to figure out in what order the tables should be accessed and what indexes to use to retrieve the data in the most efficient way.
In some cases, imperative statements that explicitly tell Kubernetes what to do are preferred. One example is the kubectl delete command, where we explicitly tell Kubernetes to delete some API objects. Creating a namespace object can also be conveniently done with an explicit kubectl create namespace command.
Repeating an imperative command will make it fail: for example, deleting the same API object twice using kubectl delete, or creating the same namespace twice using kubectl create, results in an error. A declarative command, that is, using kubectl apply, will not fail when repeated; it will simply state that there is no change and exit without taking any action.
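As a minimal illustration of the difference, assume a file named my-namespace.yaml containing a plain Namespace definition (both names are placeholders). Running the following commands twice gives different results:

kubectl create namespace my-namespace   # the second run fails with an AlreadyExists error
kubectl apply -f my-namespace.yaml      # the second run reports the namespace as unchanged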
Some commonly used commands for retrieving information about a Kubernetes cluster are as follows:
- kubectl get shows information about the specified API object
- kubectl describe gives more detail about the specified API object
- kubectl logs displays log output from containers
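As a brief illustration (my-deployment and my-pod-12345 are placeholder names), a typical round of inspection could look as follows:

kubectl get deployments
kubectl describe deployment my-deployment
kubectl logs my-pod-12345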
We will see a lot of examples of these and other kubectl commands in this and the upcoming chapters!
If in doubt about how to use the kubectl tool, the kubectl help and kubectl <command> --help commands are always available and provide very useful information. Another helpful command is kubectl explain, which can be used to show what fields are available when declaring a Kubernetes object. For example, run the following command if you need to look up the fields available to describe a container in the template of a Deployment object:
kubectl explain deployment.spec.template.spec.containers
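If you want to see the complete field tree of an object at once, the --recursive flag can be added; be aware that the output is long:

kubectl explain deployment --recursive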
Working with kubectl contexts
To be able to work with more than one Kubernetes cluster, whether created locally with Minikube or set up on on-premises servers or in the cloud, kubectl comes with the concept of contexts. A context is a combination of the following:
- A Kubernetes cluster
- Authentication information for a user
- A default namespace
By default, contexts are saved in the ~/.kube/config file, but the file can be changed using the KUBECONFIG environment variable. In this book, we will use the default location, so we will unset KUBECONFIG using the unset KUBECONFIG command.
When a Kubernetes cluster is created in Minikube, a context is created with the same name as the Minikube profile and is then set as the current context. So, kubectl commands that are issued after the cluster is created in Minikube will be sent to that cluster.
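To quickly check which context kubectl is currently using, run the following command:

kubectl config current-context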
To list the available contexts, run the following command:
kubectl config get-contexts
The following is a sample response:
Figure 15.3: List of kubectl contexts
The asterisk, *, in the first column marks the current context.
You will only see the handson-spring-boot-cloud context in the preceding response once the cluster has been created, a process we will describe shortly.
If you want to switch the current context to another context, that is, work with another Kubernetes cluster, run the following command:
kubectl config use-context my-cluster
In this example, the current context will be changed to my-cluster.
To update a context, for example, switching the default namespace used by kubectl, use the kubectl config set-context command.
For example, to change the default namespace of the current context to my-namespace, use the following command:
kubectl config set-context $(kubectl config current-context) --namespace my-namespace
In this command, kubectl config current-context is used to get the name of the current context.
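To verify that the default namespace was updated, you can run kubectl config get-contexts again; the new namespace should appear in the NAMESPACE column of the current context:

kubectl config get-contexts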
Creating a Kubernetes cluster
To create a Kubernetes cluster using Minikube, we need to run a few commands:
- Unset the KUBECONFIG environment variable to ensure that the kubectl context is created in the default config file, ~/.kube/config.
- Create the cluster using the minikube start command, where we can also specify what version of Kubernetes to use and the amount of hardware resources we want to allocate to the cluster:
- To be able to complete the examples in the remaining chapters of this book, allocate 10 GB of memory, that is, 10,240 MB, to the cluster. The samples should also work if only 6 GB (6,144 MB) are allocated to the Minikube cluster, albeit more slowly.
- Allocate the number of CPU cores and the amount of disk space you find suitable; 4 CPU cores and 30 GB of disk space are used in the example below.
- Finally, specify what version of Kubernetes will be used. In this book, we will use v1.20.5.
- Specify the Minikube profile to be used for the coming minikube commands. We will use handson-spring-boot-cloud as the profile name.
- After the cluster has been created, we will use the add-on manager in Minikube to enable an Ingress controller and a metrics server, both of which come out of the box with Minikube. The Ingress controller and the metrics server will be used in the next chapters.
Run the following commands to create the Kubernetes cluster on macOS:
unset KUBECONFIG

minikube start \
  --profile=handson-spring-boot-cloud \
  --memory=10240 \
  --cpus=4 \
  --disk-size=30g \
  --kubernetes-version=v1.20.5 \
  --driver=hyperkit

minikube profile handson-spring-boot-cloud

minikube addons enable ingress
minikube addons enable metrics-server
In WSL2 on Windows, we need to replace the HyperKit driver with the Docker driver and specify the ports we will need access to in the coming chapters. Run the following commands in WSL2:
unset KUBECONFIG

minikube start \
  --profile=handson-spring-boot-cloud \
  --memory=10240 \
  --cpus=4 \
  --disk-size=30g \
  --kubernetes-version=v1.20.5 \
  --driver=docker \
  --ports=8080:80 --ports=8443:443 \
  --ports=30080:30080 --ports=30443:30443

minikube profile handson-spring-boot-cloud

minikube addons enable ingress
minikube addons enable metrics-server
The ports 8080 and 8443 will be used by the Ingress controller and the ports 30080 and 30443 will be used by Services of type NodePort.
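If you want to double-check the mappings once the cluster is up, running docker ps in WSL2 should list the Minikube container together with the published ports declared above:

docker ps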
After the preceding commands complete, you should be able to communicate with the cluster. Try the kubectl get nodes command. It should respond with something that looks similar to the following:
Figure 15.4: List of nodes in the Kubernetes cluster
Once created, the cluster will initialize itself in the background, starting up a number of system Pods in the kube-system namespace. We can monitor its progress by issuing the following command:
kubectl get pods --namespace=kube-system
Once the startup is complete, the preceding command should report the status for all Pods as Running and the READY count should be 1/1, meaning that a single container in each Pod is up and running:
Figure 15.5: List of running system Pods
Note that two Pods are reported as Completed, and not Running. They are Pods created by Job objects, used to execute a container a fixed number of times, like a batch job. Run the command kubectl get jobs --namespace=kube-system to reveal the two Job objects.
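If you do not want to rerun the preceding get pods command manually while waiting, one option is to add the --watch flag, which makes kubectl stream status changes until you stop it with Ctrl+C:

kubectl get pods --namespace=kube-system --watch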
We are now ready for some action!
Summary
In this article, we have tried out Kubernetes by creating a local single-node cluster using Minikube. The Minikube cluster runs on macOS using HyperKit and runs as a Docker container in WSL2 on Windows.
About the Author
Magnus Larsson has been in the IT industry for more than 30 years, working as a consultant for large companies in Sweden such as Volvo, Ericsson, and AstraZeneca. He has seen a lot of different communication technologies come and go over the years, such as RPC, CORBA, SOAP, and REST. In the past, he struggled with the challenges associated with distributed systems as there was no substantial help from the software available at that time. This has, however, changed dramatically over the last few years with the introduction of open-source projects such as Spring Cloud, Netflix OSS, Docker, and Kubernetes. Over the last five years, Magnus has been helping customers use these new software technologies and has also done several presentations and blog posts on the subject.