Kubernetes for a Docker Swarm aficionado

I’m a huge Docker Swarm fan. Its nearly-zero configuration and lack of moving components made it the choice for powering SMOK. But now I have to learn Kubernetes. So let’s get down to business.

Control plane

The control plane of Kubernetes manages the whole cluster. It consists of the following parts:

  • etcd – which is the database, it’s the only stateful part of Kubernetes. It’s a CP system in terms of the CAP theorem.
  • API server – the front end through which you ask Kubernetes to do different things
  • Controller manager – a piece of software that runs child controllers, which watch for changes in the cluster and make things right (i.e. reconcile differences). Each controller takes care only of its own things.
    • Deployment controller – controller that manages Deployments
    • StatefulSet controller
    • ReplicaSet controller
    • Note that this list is hardly exhaustive
  • Scheduler – which watches the API server for work that needs to run and picks the best worker node for it. Only one scheduler instance is active at a time, so it’s mostly a single point of failure.

Of course, to ensure high availability, you need to run all of these in replicas. A Docker Swarm cluster has a mode where some hosts are managers (you can ask any manager to carry out any sort of modification) and some are workers (you can promote any node to manager and demote any to worker, but you’d better keep the number of managers odd so as not to lose quorum too fast). The same applies to etcd. The managers must first agree on a state before applying it (using the Raft consensus algorithm). As you can see, Kubernetes has a lot more moving parts than Docker Swarm. Kubernetes embraces the leader-follower model, with a single node leading the cluster and the followers ready to take over.

Worker nodes

Worker nodes do the heavy lifting:

  1. Watch the API server for new work assignments
  2. Execute work assignments
  3. Report back to control plane (via the API server)

A worker node consists of three elements, the first of which is the kubelet. It’s the main Kubernetes agent and runs on every cluster node. Then, it needs a container runtime. Docker was used in the past, but nowadays runtimes are plugged in via the CRI (Container Runtime Interface), with containerd taking the lead. containerd used to be a part of Docker, but was donated to the CNCF. The last piece of the puzzle is kube-proxy, which is responsible for local cluster networking. It ensures that every node gets its own unique IP address, and implements routing and load balancing on the Pod network.

The pod

The pod (as in a pod of whales) is the atomic unit of orchestration in Kubernetes. It’s a bunch of processes (or a single process) that share disk space, shared memory and an IP address – basically, Docker-wise, they behave like a single container. This is perfect if your app requires a log scraper, metric submitter or a trace relay. All Pod containers are always deployed together, on the same machine.
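
Here’s a minimal sketch of such a Pod manifest (the names and images are made up for illustration) – a main app container plus a log-scraper sidecar sharing a volume and the Pod’s IP address:

    apiVersion: v1
    kind: Pod
    metadata:
      name: webapp                        # hypothetical Pod name
    spec:
      containers:
        - name: app                       # the main process
          image: example.com/webapp:1.0   # hypothetical image
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: logs
              mountPath: /var/log/webapp
        - name: log-scraper               # sidecar reading the same log directory
          image: example.com/log-scraper:1.0
          volumeMounts:
            - name: logs
              mountPath: /var/log/webapp
      volumes:
        - name: logs
          emptyDir: {}                    # scratch space shared by both containers

Both containers always land on the same node and can reach each other over localhost.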

Networking in Kubernetes

There is a single DNS service whose static IP address is hardcoded into every Pod on the cluster, so service registration (and deregistration) is automatic. This works quite like Docker Swarm’s internal routing mesh (if your services happen to be in the same overlay network), where you call the service by its proper name, e.g. stackname_servicename. Every Pod gets its own unique IP address and hostname, and it’s up to the services running inside the Pod to share them and make use of that. This is a major difference from Docker Swarm, where just dialing a hostname gets you a load-balanced replica, though you can still call replicas via their IP addresses.

Deploying stuff

While you can manually deploy Pods on a cluster, most often you will deploy them via the Deployment controller. First, you write your application, then you package it into a container image. Now you’ll need to define the Pod. It can be either a single container, or a bunch of containers working together. This is a slight difference from Docker Swarm, where you just ran Docker containers, which can share IP addresses, disk and memory space, but you’d have to launch them together with a script.

Kubernetes, very much like Docker Swarm, is a declarative system. This means that you describe what your system should look like, and the orchestrator does its best to make it so.

In Kubernetes, you’ll most frequently deploy your services via a controller. You describe in a simple YAML file what your deployment should look like, then you submit it to the API server (via the kubectl CLI). Kubernetes then locates the right controller and stores the information in the database. The controller will then monitor the cluster and make sure that things are all right. Containers within a Pod can talk to each other over localhost.

Pods are also the unit of orchestration and service scaling, much like a replica in Docker Swarm. They resemble replicas in another way too: any configuration change requires a restart. But whereas in Docker Swarm you define an image, a set of environment variables and configurations, and perhaps expose a bunch of ports, in Kubernetes you don’t deploy Pods directly. Most frequently, you’ll use a Deployment, DaemonSet or a StatefulSet.

As in Docker Swarm, Pods (as replicas) are unreliable and can fail at any time. This forces you to design your system around this fact. Therefore, you’ll need a load balancer. Whereas in Docker Swarm it’s integrated, in Kubernetes you’ll need a Service object, which provides a reliable name, IP and port number. A Service is a fully-fledged Kubernetes object, just like a Deployment.

Deploying pods

In Docker Swarm, there’s only one way to deploy a replica – you define a service, mark it as replicated or global, and voilà. In Kubernetes things are a bit harder: you can deploy a Pod directly from a Pod manifest (these are called static Pods), or deploy it via a controller. A static Pod lands on a single node and is monitored by that node’s kubelet. If the node fails, you’re toast – your Pod won’t automatically fail over. In this case Docker Swarm would just pick another host and launch the replica there.

Therefore you’d most likely choose to deploy things via a controller, which runs on the control plane and can deploy things anywhere. Moreover, just like services in Docker Swarm, Pods are ephemeral (although you can mount a volume to a Docker Swarm service to store data there). Anyway, it’s best to store data in a dedicated database service.

I won’t discuss the multi-container Pod, because there’s little difference between it and processes running inside the same container.

One thing that I miss in Docker Swarm is the capability of splitting a cluster into smaller “subcomponents”. I’ve frequently run into issues where a microservice placed in the same stack could call its friend core by that short name, but after putting them into a common network they needed to call each other by their full names, namely smok5core_core. Kubernetes has this capability (namespaces), and Docker Swarm stacks are notoriously finicky with their DNS.
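
A Namespace is just another object you submit to the API server. A minimal sketch (the name is hypothetical):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: smok-core        # hypothetical namespace name

Within the same Namespace a Service called core is reachable simply as core; from other Namespaces you’d use core.smok-core (or the fully qualified core.smok-core.svc.cluster.local).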

Deployment

There are two major components to deployments – the specification and the controller. The spec is a YAML file that defines the desired state of your application. You then submit it to the API server, which picks the right controller for the job. Controllers are highly available.

Deployment object

In a Deployment object you can specify the following (a minimal sketch follows this list):

  • service name (in the metadata section; it’s what gives the Deployment its name)
  • number of replicas
  • the Pod template and the images the containers are based on
  • ports that they expose
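
Here’s a minimal Deployment sketch covering those fields (names and the image are hypothetical):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: webapp                  # the Deployment's name
    spec:
      replicas: 3                   # number of replicas
      selector:
        matchLabels:
          app: webapp               # must match the Pod template's labels
      template:                     # the Pod template and the image it's based on
        metadata:
          labels:
            app: webapp
        spec:
          containers:
            - name: app
              image: example.com/webapp:1.0
              ports:
                - containerPort: 8080   # port the container exposes

You’d submit it with the kubectl CLI (kubectl apply -f deployment.yaml) and the Deployment controller takes it from there.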

Deployment is pretty similar to Docker Swarm’s service, sans the automatic load balancing. But there’s one crucial part – while Docker Swarm would automatically load balance your traffic via its ingress port, in Kubernetes you need to define an extra object:

Service

Services map external TCP/UDP ports to one of the Pods orchestrated by your Deployment. Controllers and Services find their Pods by the right combination of labels and selectors, so in previous versions of Kubernetes it was possible for a Deployment to take ownership of a statically deployed Pod. I find it far more confusing than ingress and overlay networks in Docker Swarm.
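
A minimal Service sketch that selects the Pods of the hypothetical webapp Deployment above by their labels:

    apiVersion: v1
    kind: Service
    metadata:
      name: webapp                  # the name other Pods will resolve via DNS
    spec:
      selector:
        app: webapp                 # must match the labels on the Deployment's Pods
      ports:
        - port: 80                  # port the Service listens on
          targetPort: 8080          # port on the Pods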

Internal networking

In order to make a bunch of your Pods talk to another bunch of your Pods, you’ll need a Service. There are many types of Services, one of which is ClusterIP, which registers its hostname inside the cluster, making it accessible from everywhere. All of the Pods are pre-configured to use the cluster’s DNS service, which means that every Pod can resolve a Service name to its ClusterIP. You can naturally define a NetworkPolicy to prevent certain Pods from talking to each other.
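
A NetworkPolicy is a separate object; here’s a sketch (the labels are hypothetical) that only lets frontend Pods talk to backend Pods on port 8080:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: frontend-to-backend-only   # hypothetical name
    spec:
      podSelector:
        matchLabels:
          app: backend                 # the Pods being protected
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend        # only these Pods may connect
          ports:
            - protocol: TCP
              port: 8080

Note that this only has an effect if your CNI plugin actually enforces NetworkPolicies.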

In Docker Swarm it’s simple – if two containers are in the same overlay network, they can talk.

External networking

In Docker Swarm exposing something to the world was easy – you’d just define an ingress port for some service and be done with it. Not so easy with Kubernetes.

You have two types of Services to worry about. NodePort enables external access via a dedicated port on every cluster node, just like the ingress network of Docker Swarm. You’ll need to configure your NodePort Service to redirect a particular port to a particular ClusterIP Service. Since you know that it registers a hostname, an IP address and a port, you can redirect NodePort traffic there.
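
A NodePort Service sketch (names and ports are hypothetical) – it opens the same port on every cluster node and forwards traffic to the selected Pods:

    apiVersion: v1
    kind: Service
    metadata:
      name: webapp-nodeport
    spec:
      type: NodePort
      selector:
        app: webapp
      ports:
        - port: 80              # the internal (ClusterIP) port
          targetPort: 8080      # the port on the Pods
          nodePort: 30080       # the port opened on every node (30000-32767 by default)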

Then there’s the LoadBalancer Service. It basically accepts connections on a single port and routes them to one of the healthy Pods deployed via a single Deployment.
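
The manifest differs only in the type field; a sketch, assuming you’re on a cloud provider that can actually provision the external load balancer:

    apiVersion: v1
    kind: Service
    metadata:
      name: webapp-lb
    spec:
      type: LoadBalancer        # the cloud provider creates an external load balancer
      selector:
        app: webapp
      ports:
        - port: 80
          targetPort: 8080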

This is a far cry from Docker Swarm, where you just declare an ingress port and it’s available from every node of the cluster, providing (in my opinion) much stronger high-availability guarantees.

Deploying multiple websites on the same server

Here things don’t differ that much. Whereas in Docker Swarm you’d expose TCP port 443 or 80 and reverse proxy to hostnames that joined the same overlay network (load balancing will be done for you automatically), in Kubernetes you’d do essentially the same: expose the reverse proxy externally and route to your backends via their Service names.

Data storage

One field where Kubernetes outperforms Docker Swarm is the persistent volume subsystem. I remember when I had to hack ZooKeeper to serve as a filesystem for shared Let’s Encrypt certificates, since I ran 3 replicas of ingress nginx (I wrote a huge patch for the zookeeper-fuse project, since Let’s Encrypt requires symlinks, and the original zookeeper-fuse had none of that).

Kubernetes can use cloud solutions (such as AWS EBS or Azure Files) or on-premises storage arrays providing iSCSI, FC and NFS volumes. And you know how reliable NFS is, so I think that’s a bad idea.

At the highest level of abstraction, Kubernetes defines PersistentVolumes, which can have classes (such as fast SSD or slow HDD). An application requests storage through a PersistentVolumeClaim, but I tend to disregard this aspect, since I prefer to store my data in highly available databases – that’s where it belongs.
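
For completeness, a PersistentVolumeClaim sketch requesting 10 GiB from a hypothetical “fast” StorageClass:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: webapp-data
    spec:
      storageClassName: fast        # hypothetical class, e.g. SSD-backed
      accessModes:
        - ReadWriteOnce             # mountable read-write by a single node at a time
      resources:
        requests:
          storage: 10Gi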

Configs and secrets

Things look extremely similar in both Kubernetes and Docker Swarm, so I won’t elaborate further. In both cases a change in configuration requires a pod/replica restart.

StatefulSets

One of the more interesting things in Kubernetes is StatefulSets. These guarantee:

  • predictable and persistent Pod names
  • predictable and persistent DNS hostnames
  • predictable and persistent volume bindings

It just presents one problem with reliability – it is possible to have two Pods using the same volume, which could result in data corruption. Yet another reason to keep your data in a database.
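
A StatefulSet sketch (names and image are hypothetical; it assumes a headless Service called db already exists) showing where those guarantees come from:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: db                          # Pods will be named db-0, db-1, ...
    spec:
      serviceName: db                   # headless Service providing the stable DNS names
      replicas: 2
      selector:
        matchLabels:
          app: db
      template:
        metadata:
          labels:
            app: db
        spec:
          containers:
            - name: db
              image: example.com/db:1.0
              volumeMounts:
                - name: data
                  mountPath: /var/lib/db
      volumeClaimTemplates:             # each Pod gets its own persistent volume binding
        - metadata:
            name: data
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 10Gi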

API comparison

Kubernetes prefers to use an HTTP REST API, while Docker Swarm prefers its CLI tools. The big advantage of Docker Swarm is that it’s welded to Docker and uses the same API, including the REST API.

Security

While implementing security, Kubernetes relies mostly on certificates (including self-signed) – either your certificate matches and you’ve got free rein of the cluster, or you’re locked out – whereas Docker Swarm has none of that. However, Docker Swarm allows you to encrypt the data sent between different microservices inside its overlay networks, which is a feature I didn’t find in Kubernetes.

Regarding secrets and configurations, these work similarly in Kubernetes and in Docker Swarm.

The final verdict

While I realize that Docker Swarm’s time is limited, I consider it to be the superior solution. With far fewer moving parts, and configuration done via commands rather than YAML files, it’s just so much easier to manage in a small cluster. However, if you have a huge cluster, split into smaller virtual Kuberneteses (called Namespaces), Kubernetes seems like the more reasonable choice. Still, configuring it and picking the right CNI plugin hardly seems worth the effort.

Kubernetes has a huge array of authentication and authorization mechanisms based on RBAC. Docker Swarm has none of that – one exposed port grants you access to the entirety of the cluster. Of course you can use TLS and certificates to secure that.

Docker Swarm is far easier to operate, but if you need your cluster to do really weird stuff, pick Kubernetes. It’s infinitely more configurable.

Summary

Docker Swarm concept → Kubernetes concept

  • Service with an exposed port → Deployment + ClusterIP + NodePort + LoadBalancer
  • Volumes + Docker Volume plugins → PersistentVolumes
  • Preventing services from talking to each other by putting them on different overlay networks → defining the proper labels + a NetworkPolicy
  • Some nodes are managers, and any one of them can execute changes to cluster state → a single master, re-elected if it fails, is the only one allowed to do so
  • Built into Docker itself → requires a number of moving parts
