
Docker vs CRI-O vs Containerd

Introduction

The sphere of containers is like a labyrinthine forest: branching tunnels and jargon piled on top of jargon can sooner or later lead you to a destination we have all visited before, the chilly destination of confusion. This post will do its best to clear the thicket and let in some sunshine to light a few paths here and there, so that you can continue your container journey with a smile on your face. We are going to look at the differences among Docker, CRI-O, and containerd. After a bit of reading, here is what we found out about each of them.

“Wherever you are, and whatever you do, be in love.”
-Rumi

Docker

Before version 1.11, Docker was implemented as a monolithic daemon. The monolith did everything in one package: downloading container images, launching container processes, exposing a remote API, and acting as a log collection daemon, all in a centralized process running as root (Source: CoreOS).

Such a centralized architecture has some benefits when it comes to deployment, but it raises other fundamental problems. For example, it does not follow best practices for Unix process and privilege separation. Moreover, the monolithic implementation makes Docker difficult to integrate properly with Linux init systems such as upstart and systemd (Source: CoreOS).

This led to the splitting of Docker into separate components, as described in the announcement quoted below, when Docker 1.11 was launched.

“We are excited to introduce Docker Engine 1.11, our first release built on runC ™ and containerd ™. With this release, Docker is the first to ship a runtime based on OCI technology, demonstrating the progress the team has made since donating our industry-standard container format and runtime under the Linux Foundation in June of 2015” (Source: Docker).

According to Docker, splitting it up into focused, independent tools means more focused maintainers and, ultimately, better-quality software.

You can get more information and details about OCI technology here.

The figure below illustrates the new architecture of Docker 1.11, built on runC ™ and containerd ™.

[Figure: Docker 1.11 architecture (Source: Docker)]

Since then, containerd handles the execution of containers, which was previously done by the Docker daemon itself. This is the exact flow:

A user runs commands from the Docker CLI > the Docker CLI talks to the Docker daemon (dockerd) > dockerd listens for requests and manages the lifecycle of the container via containerd, which it contacts > containerd takes the request, starts the container through runC, and handles the entire container lifecycle on the host.
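
To make the first hops of that flow concrete, here is a minimal sketch in Go using the Docker Engine SDK (github.com/docker/docker/client), which talks to dockerd over the same API the Docker CLI uses. The image name is illustrative, and the exact option types vary slightly between SDK versions, so treat this as a sketch rather than a definitive implementation:

```go
package main

import (
	"context"
	"io"
	"log"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()

	// Connect to dockerd over its local socket, exactly as the Docker CLI does.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Ask dockerd to pull an image; dockerd delegates storage and unpacking to containerd.
	rc, err := cli.ImagePull(ctx, "docker.io/library/alpine:latest", types.ImagePullOptions{})
	if err != nil {
		log.Fatal(err)
	}
	io.Copy(os.Stdout, rc)
	rc.Close()

	// Create and start a container; behind the scenes dockerd calls containerd,
	// which in turn invokes runC to spawn the actual process.
	created, err := cli.ContainerCreate(ctx, &container.Config{
		Image: "alpine",
		Cmd:   []string{"echo", "hello from the container"},
	}, nil, nil, nil, "")
	if err != nil {
		log.Fatal(err)
	}
	if err := cli.ContainerStart(ctx, created.ID, types.ContainerStartOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("started container", created.ID)
}
```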

Note: runc, in brief, is a CLI tool for spawning and running containers according to the OCI specification.
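
To get a feel for what runc actually consumes, here is a small sketch that builds a minimal OCI runtime spec using the Go types from github.com/opencontainers/runtime-spec. runc reads such a document as config.json inside a bundle directory next to a rootfs/ folder; the fields shown are illustrative, not a complete specification:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	// A minimal OCI runtime specification. A real bundle produced by
	// containerd or CRI-O also fills in namespaces, mounts, cgroups, and more.
	spec := specs.Spec{
		Version: specs.Version,
		Process: &specs.Process{
			Terminal: false,
			Cwd:      "/",
			Args:     []string{"sh", "-c", "echo hello from an OCI bundle"},
		},
		Root: &specs.Root{
			Path:     "rootfs", // the unpacked image filesystem lives here
			Readonly: true,
		},
		Hostname: "oci-demo",
	}

	out, err := json.MarshalIndent(spec, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	// Save this output as config.json in the bundle directory.
	fmt.Println(string(out))
}
```

With the bundle in place, running `runc run <container-id>` from the bundle directory creates and starts the container according to that spec.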

Containerd

Now let us shift our focus and understand what containerd is all about. From a high-level standpoint, containerd is a daemon that controls runC. From the containerd website: “containerd manages the complete container lifecycle of its host system, from image transfer and storage to container execution and supervision to low-level storage to network attachments and beyond”. It is also known as a container engine.

containerd helps abstract away syscalls and operating-system-specific functionality needed to run containers on Linux, Windows, or any other operating system. It provides a client layer that platforms such as Docker or Kubernetes can build on top of without ever having to dip into kernel-level details. It should be noted that in Kubernetes, containerd can be used as a CRI runtime; we will tackle CRI in a jiffy. This is what you get by leveraging containerd (a small client sketch follows this list):

  • Push and pull functionality
  • Image management APIs to create, execute, and manage containers and their tasks
  • Snapshot management
  • All of that without ever scratching your head over the underlying OS details (Source: Docker)
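
Here is a minimal, hedged sketch of that client layer using the containerd Go client (github.com/containerd/containerd). The socket path, namespace, and image name are common defaults rather than requirements, and import paths differ slightly in containerd 2.x:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the containerd daemon over its default socket.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Namespaces keep different clients (Docker, Kubernetes, this demo) isolated.
	ctx := namespaces.WithNamespace(context.Background(), "demo")

	// Pull and unpack an image.
	image, err := client.Pull(ctx, "docker.io/library/alpine:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create a container with a new snapshot and an OCI spec derived from the image.
	container, err := client.NewContainer(ctx, "demo-container",
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// A task is the running instance of the container; containerd hands it to runC.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container with PID %d", task.Pid())
}
```

Notice how images, snapshots, the OCI spec, and the task (the running process handed to runC) are all explicit concepts in the API, while the kernel-level details stay hidden.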

CRI-O

And now onto CRI-O. Before we delve into CRI-O, let's immerse our heads briefly in the pool of CRI, that is, the Container Runtime Interface. CRI is a plugin interface that gives the kubelet the ability to use different OCI-compliant container runtimes (such as runc, the most widely used, but also others such as crun, railcar, and Kata Containers) without needing to recompile Kubernetes. The kubelet, as you know, is a cluster node agent used to create pods and start containers (Source: Red Hat).

To understand the need for CRI, it helps to know the pain points Kubernetes experienced before it existed. Kubernetes was previously bound to a specific container runtime, which created a lot of maintenance overhead for the upstream Kubernetes community. Moreover, vendors building solutions on top of Kubernetes experienced the same overhead. This necessitated the development of CRI to make Kubernetes container-runtime-agnostic by decoupling it from any particular runtime.

With this plugin interface in place, the CRI-O project was started to provide a lightweight runtime specifically for Kubernetes. CRI-O makes it possible for Kubernetes to run containers directly without much extra tooling and code.
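
Since CRI is just a gRPC API served over a local socket, any client, not only the kubelet, can talk to CRI-O through it. Below is a minimal, hedged Go sketch that calls the CRI Version endpoint; the socket path is CRI-O's common default, and whether the CRI package is v1 or v1alpha2 depends on your Kubernetes release:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// The kubelet talks to CRI-O (or containerd's CRI plugin) over a local gRPC socket.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Version is the simplest CRI call: it reports the runtime name and its versions.
	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("runtime: %s %s (CRI %s)", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
```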

Components of CRI-O

The following are the components of CRI-O:

  • OCI-compatible runtime – the default is runC; other OCI-compliant runtimes, e.g. Kata Containers, are supported as well.
  • containers/storage – library used for managing layers and creating root filesystems for the containers in a pod.
  • containers/image – library used for pulling images from registries.
  • networking (CNI) – used for setting up networking for the pods. The Flannel, Weave, and OpenShift-SDN CNI plugins have been tested.
  • container monitoring (conmon) – utility within CRI-O used to monitor the containers.
  • security – provided by several core Linux capabilities.

Source: CRI-O

The diagram below illustrates the whole Kubernetes and CRI-O process. From it, it can also be observed that CRI-O falls under the category of a container engine.

[Figure: the Kubernetes and CRI-O flow]

Conclusion

Even though there are many products in the container sphere, a closer look shows that most of them are efforts to fix an issue here or there. Docker, CRI-O, and containerd each have their own space, and all of them can serve Kubernetes in launching and maintaining pods. What can be observed is that all three depend on runC at the lowest level to handle the actual running of containers. We hope the post was as informative and beneficial as you had wished.

Before you leave, take a look at related articles and guides below:

Install and Use Helm 2 on Kubernetes Cluster

Easily Manage Multiple Kubernetes Clusters with kubectl & kubectx

Configure Kubernetes Dynamic Volume Provisioning With Heketi & GlusterFS

How To Deploy Lightweight Kubernetes Cluster in 5 minutes with K3s

Setup Kubernetes / OpenShift Dynamic Persistent Volume Provisioning with GlusterFS and Heketi

Managing Docker Containers with Docker Compose

How To Export and Import Docker Images / Containers

Install Harbor Docker Image Registry on CentOS / Debian / Ubuntu

How To run Local Kubernetes Cluster in Docker Containers
