Is K3s = K8s but better?
What is k3s?
K3s is a CNCF sandbox project that delivers a lightweight yet powerful certified Kubernetes distribution (not a fork!) built for IoT and edge computing.
Originally a project from Rancher Labs, it was donated to the CNCF in 2020. SUSE is now a major contributor to the project.
The name K3s alludes to it being a lightweight, half-size version of K8s. Kubernetes is a 10-letter word stylized as K8s, with 8 letters between the K and the s; since K3s' installation was planned to be half the size of K8s in terms of memory footprint, its name became a 5-letter word stylized as K3s. However, there is no long form of K3s and no official pronunciation.
When used with SUSE Rancher, K3s is easy to install and suitable for running production workloads across resource-constrained, remote locations like IoT devices. This lightweight yet highly available Kubernetes distribution can be easily managed from the Rancher orchestration platform.
K3s is packaged as a single binary of less than 40 MB, which reduces the dependencies and steps needed to install, run, and auto-update a production Kubernetes cluster. It is also optimized for ARM, working equally well on something as small as a Raspberry Pi or as large as an AWS a1.4xlarge 32 GiB server.
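As a sketch of how small that installation surface is, the standard one-command install (via the official get.k3s.io script, run as root) looks like this:

```shell
# Download and run the official K3s installer; it sets up and starts
# the server as a systemd service.
curl -sfL https://get.k3s.io | sh -

# Verify the node is up using the kubectl bundled into the k3s binary.
sudo k3s kubectl get nodes
```

The same binary contains the server, the agent, and kubectl, which is why the install reduces to a single download.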
K3s vs K8s
K3s fully implements the Kubernetes API despite being designed as a tiny single binary. The main difference is that many optional drivers (e.g., in-tree storage drivers and in-tree cloud providers) were removed and replaced with add-ons.
K3s is a fully conformant production-ready Kubernetes distribution with the following changes:
- It is packaged as a single binary.
- The memory footprint to run it is smaller than standard K8s.
- The binary, which contains all the non-containerized components needed to run a cluster, is smaller.
- It adds support for SQLite as the default storage backend; etcd3, MySQL, and PostgreSQL are also supported.
- It wraps Kubernetes and other components in a single, simple launcher.
- It is secure by default with reasonable defaults for lightweight environments.
- It has minimal OS requirements (just a sane kernel and cgroup mounts needed).
- It eliminates the need to expose a port on Kubernetes worker nodes for the kubelet API by exposing this API to the Kubernetes control plane nodes over a websocket tunnel.
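Regarding the minimal OS requirements mentioned above, K3s ships a built-in checker that validates the host's kernel and cgroup configuration before you commit to a node:

```shell
# Inspect the running host for required kernel features and cgroup mounts;
# prints a pass/fail report per requirement.
sudo k3s check-config
```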
Advantages of K3s over K8s
- Lower resource requirements.
- You can run a cluster on machines with as little as 512 MB of RAM.
- By default, pods can run on the master as well as on worker nodes.
- Installation takes a fraction of the time compared to a regular K8s cluster.
- It can deploy applications faster and spin up clusters more quickly.
- Flexible to run on devices with low specs (e.g., IoT and edge use cases).
- Adding and removing nodes is easy with one-line commands. Setting up a single node or multi-node cluster is simpler.
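Those one-line commands for building a multi-node cluster look roughly like this (the `<server-ip>` and `<token>` placeholders are yours to fill in):

```shell
# On the server node: install and start the K3s server.
curl -sfL https://get.k3s.io | sh -

# Read the join token the server generated.
sudo cat /var/lib/rancher/k3s/server/node-token

# On each additional node: one line to join as an agent.
curl -sfL https://get.k3s.io | \
  K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -
```

Setting `K3S_URL` is what tells the installer to run the agent rather than a server.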
K3s bundles the following technologies together into a single cohesive distribution:
- Containerd & runc
- Flannel for CNI
- Metrics Server
- Traefik for ingress
- Klipper-lb as an embedded service load balancer provider
- Kube-router netpol controller for network policy
- Helm-controller to allow for CRD-driven deployment of Helm charts
- Kine as a datastore shim that allows etcd to be replaced with other databases
- Local-path-provisioner for provisioning volumes using local storage
- Host utilities such as iptables/nftables, ebtables, ethtool, & socat
These technologies can, of course, be disabled or swapped out for technologies of your choice.
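For example, the bundled components can be turned off at install time with `--disable` flags, here shown (as an illustrative choice, not a recommendation) removing Traefik and the Klipper service load balancer to make room for alternatives such as nginx-ingress or MetalLB:

```shell
# Install the K3s server without the bundled ingress controller and
# service load balancer; arguments after "sh -s -" are passed to "k3s server".
curl -sfL https://get.k3s.io | sh -s - --disable traefik --disable servicelb
```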
Additionally, K3s simplifies Kubernetes operations by maintaining functionality for:
- Managing the TLS certificates of Kubernetes components
- Managing the connection between worker and server nodes
- Auto-deploying Kubernetes resources from local manifests in real-time as they are changed.
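The auto-deploy mechanism works by watching a manifests directory on the server node. A minimal sketch, using a hypothetical `whoami` demo workload (the directory path is the documented one; the workload name and image are illustrative):

```shell
# Any manifest written to this directory is applied automatically,
# and re-applied whenever the file changes.
sudo tee /var/lib/rancher/k3s/server/manifests/whoami.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami            # hypothetical example workload
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels: {app: whoami}
  template:
    metadata:
      labels: {app: whoami}
    spec:
      containers:
      - name: whoami
        image: traefik/whoami   # small public demo image
EOF
```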
How it works
Similar to a master node and a worker node in K8s, K3s architecture consists of a server and an agent.
- Server node: a bare-metal or virtual machine that runs the K3s server components
- Agent (worker) node: a bare-metal or virtual machine that runs the K3s agent services
In K3s, the kube-proxy on the agent connects to the API server through a tunnel proxy. (Ordinarily, kube-proxy needs several ports open to reach the API server.) The tunnel proxy first establishes an outbound connection to the API server; bidirectional communication then flows over that link. Funneling all traffic through a single port makes for a more secure connection.
SQLite replaces etcd as the default datastore, which lets K3s drop the etcd dependency and run a single-node cluster on SQLite alone. For multi-node clusters, K3s can use external databases.
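In practice, switching from the default SQLite to an external database is a single server flag. The connection string below is illustrative; substitute a real host and credentials:

```shell
# Single node: nothing to configure, SQLite is the default.
# Multi-node: point every server at a shared external datastore, e.g. MySQL.
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://user:pass@tcp(db-host:3306)/k3s"
```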
Flannel runs on the agent and provides cluster networking via the Container Network Interface (CNI). It connects to the kubelet, which in turn connects to containerd, and containerd manages the pods.
As opposed to K8s, where each component runs as a separate process, in K3s all of the components run together in a single process, making it really lightweight.
K3s components are similar to those of K8s except for SQLite, Tunnel Proxy, and Flannel (which are new additions).
The K3s architecture allows a single-node cluster to spin up in as little as 60-90 seconds, since both the server and the agent run as a single process on one node.
K3s Use cases
K3s is often mistaken for a fork, but it is a fully conformant distribution of Kubernetes, and it can be used for a variety of cases beyond just the edge.
Here are some examples:
- Learning: K3s allows you to learn about clusters and Kubernetes features without having to go through the heavy set-up process of K8s.
- Run a lightweight Kubernetes development environment: you can do this without deep knowledge of Kubernetes internals.
- Run Kubernetes on Raspberry Pi clusters, IoT, and ARM-based devices: running and managing Kubernetes under tight resource constraints becomes easier with K3s.
- CI/CD pipelines: you can build Continuous Deployment (CD) pipelines in a GitOps paradigm. With a lightweight K3s cluster and Argo CD, you can redeploy an application whenever its declarative configuration is updated.
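A minimal sketch of that GitOps flow, assuming Argo CD is installed from its standard upstream manifest and a hypothetical Git repository holds the application manifests (the repo URL, app name, and path below are placeholders):

```shell
# Install Argo CD into the K3s cluster.
k3s kubectl create namespace argocd
k3s kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Register an application tracking a (hypothetical) repo. With an automated
# sync policy, Argo CD redeploys whenever the declarative config changes.
argocd app create demo-app \
  --repo https://github.com/example/demo-manifests.git \
  --path overlays/prod \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default \
  --sync-policy automated
```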