Kubernetes 1.26 is here and it's ⚡Electrifying⚡

Last 2022 release

Kubernetes 1.26, the last release of 2022, was published on Friday, December 9.

This version of Kubernetes is named Electrifying, and it looks as exciting as it sounds!

On the one hand, the name was picked to draw attention to the incredible amount of energy consumed by the systems that Kubernetes orchestrates.

But Electrifying also refers to the characteristics of the K8s community and the increased automation that was used in this release.

This release comes with 37 enhancements:

  • 11 graduating to Stable
  • 10 graduating to Beta
  • 16 entering Alpha

And there are 12 features that have been deprecated or removed.

Kubernetes 1.26 features long-awaited enhancements, such as stable support for Windows privileged containers and a new dynamic resource allocation API.

Here are the highlights:

1. Support for Windows Privileged Containers [Stable] and Host Networking [Alpha]

Privileged containers have access and capabilities that are similar to the host processes running on the servers. 

Both the management of processes and the way privileged containers work differ significantly between Linux and Windows from the operating system standpoint.

HostProcess containers bring this capability to Windows nodes, and the feature is now graduating to Stable, ensuring the same level of security and operational experience as on Linux. It is enabled by default and allows privileged containers to access host resources, including network resources.

Additionally, there is a new alpha enhancement that supports host networking for Windows pods. It enables Windows containers to use the networking namespace of their nodes (from the Kubernetes side), increasing the parity between Linux and Windows containers.
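As a rough sketch of what a HostProcess pod looks like (the image, command, and pod name here are illustrative, not taken from the release notes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostprocess-example
spec:
  securityContext:
    windowsOptions:
      # Marks every container in the pod as a HostProcess container
      hostProcess: true
      runAsUserName: "NT AUTHORITY\\SYSTEM"
  # HostProcess containers require the host's network namespace
  hostNetwork: true
  containers:
    - name: hostprocess
      image: mcr.microsoft.com/windows/nanoserver:ltsc2022
      command: ["powershell.exe", "-Command", "Get-Process"]
  nodeSelector:
    kubernetes.io/os: windows
```

Note that HostProcess containers can only run on Windows nodes with a sufficiently recent containerd runtime.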

2. CEL for admission control (Alpha)

Many were waiting for a practical implementation of the validation expression language introduced in Kubernetes 1.25.

This feature introduces a v1alpha1 API for validating admission policies, eliminating the need to manage webhooks and the drawbacks that come with them. It enables extensible admission control via Common Expression Language (CEL) expressions, simplifying cluster setup since admission rules are defined as regular Kubernetes objects.
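To give an idea of the shape of these new objects, here is a minimal sketch of a ValidatingAdmissionPolicy and its binding; the policy name, replica limit, and namespace selector are illustrative:

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: replica-limit
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    # CEL expression evaluated against the incoming object
    - expression: "object.spec.replicas <= 5"
      message: "Deployments may not have more than 5 replicas."
---
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: replica-limit-binding
spec:
  policyName: replica-limit
  matchResources:
    # Apply the policy only to namespaces labeled environment=test
    namespaceSelector:
      matchLabels:
        environment: test
```

The policy declares what to validate; the binding decides where it applies, so the same policy can be reused across clusters or namespaces.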

3. Provision volumes from cross-namespace snapshots (Alpha)

Up until Kubernetes 1.25, the VolumeSnapshot feature had great benefits but also some limitations, like the inability to bind a PersistentVolumeClaim to a VolumeSnapshot from another namespace.


From now on, this enhancement allows users to create a PersistentVolumeClaim from a VolumeSnapshot in a different namespace. Since both objects no longer need to live in the same namespace, restoring a database checkpoint when applications and services are spread across namespaces will no longer be a problem.
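A minimal sketch, assuming the CrossNamespaceVolumeDataSource feature gate is enabled and the ReferenceGrant CRD (from the Gateway API project) is installed; all names and namespaces below are illustrative:

```yaml
# ReferenceGrant lives in the namespace that owns the snapshot
# and explicitly allows PVCs in "app-ns" to reference it
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-pvc-from-app-ns
  namespace: backup-ns
spec:
  from:
    - group: ""
      kind: PersistentVolumeClaim
      namespace: app-ns
  to:
    - group: snapshot.storage.k8s.io
      kind: VolumeSnapshot
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-data
  namespace: app-ns
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  # dataSourceRef now accepts a namespace, pointing across namespaces
  dataSourceRef:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: db-checkpoint
    namespace: backup-ns
```

The ReferenceGrant acts as an opt-in from the snapshot's owner, so cross-namespace access stays explicit rather than implicit.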

4. Kubernetes component health SLIs (Alpha)

Health Service Level Indicators (SLIs) have graduated to Alpha, which now allows you to configure SLI metrics for the Kubernetes component binaries. By enabling them, you eliminate the need for a separate Prometheus exporter, since Kubernetes exposes the SLI metrics at the /metrics/slis endpoint.

This improves your cluster’s stability by facilitating the creation of health dashboards and the configuration of PromQL alerts. Kubernetes monitoring is improved greatly thanks to this enhancement.

For each component, two metrics will be exposed:

  • A gauge – representing the current state of the healthcheck.
  • A counter – recording the cumulative counts observed for each healthcheck state.

With this information, you can check the status of the Kubernetes internals over time, e.g.:

kubernetes_healthcheck{name="etcd",type="readyz"}
And create an alert for when something’s wrong, e.g.:

kubernetes_healthchecks_total{name="etcd",status="error",type="readyz"} > 0

5. Dynamic resource allocation (Alpha)

Dynamic Resource Allocation is a new alpha feature that allows pods to request and share resources, both between pods and between containers inside a pod.

It is a generalization of the persistent volumes API for generic resources. Resource scheduling, tracking, and allocation become the responsibility of third-party drivers: the API offers an alternative to the limited "countable" interface for requesting access to resources (e.g., nvidia.com/gpu: 2), providing an API more akin to that of persistent volumes.

Under the hood, it uses the Container Device Interface (CDI) for device injection. The feature is gated behind the DynamicResourceAllocation feature gate.


The Kubernetes scheduler had already been expanded to take into account not only CPU and memory limits and requests, but also storage and other resources. Nevertheless, this is still limiting in some cases, for instance for the initialization and cleanup of a device (e.g., an FPGA), or for limiting access to a shared resource (e.g., a shared GPU).

Thanks to the new ResourceClaimTemplate and ResourceClass objects, and the new resourceClaims field inside Pods, this new API covers those scenarios of resource allocation and dynamic detection.

The scheduler can keep track of these resource claims, and only schedule Pods in those nodes with enough resources available, speeding up Kubernetes adoption in areas like scientific research or edge computing.
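A minimal sketch of the new objects working together; the driver name gpu.example.com is hypothetical and would normally be provided by a third-party resource driver, and the feature requires the DynamicResourceAllocation feature gate:

```yaml
# A class of resources, handled by a (hypothetical) third-party driver
apiVersion: resource.k8s.io/v1alpha1
kind: ResourceClass
metadata:
  name: example-gpu
driverName: gpu.example.com
---
# A template from which a ResourceClaim is created per pod
apiVersion: resource.k8s.io/v1alpha1
kind: ResourceClaimTemplate
metadata:
  name: gpu-template
spec:
  spec:
    resourceClassName: example-gpu
---
apiVersion: v1
kind: Pod
metadata:
  name: dra-example
spec:
  # The pod declares the claims it needs...
  resourceClaims:
    - name: gpu
      source:
        resourceClaimTemplateName: gpu-template
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      resources:
        # ...and each container references the claims it uses
        claims:
          - name: gpu
```

The scheduler then only places the pod on a node where the driver can satisfy the claim, which is what makes initialization, cleanup, and sharing of devices possible.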

6. Deprecated/removed APIs and features + alternatives

A few beta APIs and features have been deprecated or removed in Kubernetes 1.26, including:

  • CRI v1alpha2: removed; containerd 1.5 and older are no longer supported, use CRI v1 instead.
  • flowcontrol.apiserver.k8s.io/v1beta1 for `FlowSchema` and `PriorityLevelConfiguration`: requires a migration to the v1beta2 API version.
  • autoscaling/v2beta2 for `HorizontalPodAutoscaler`: requires a migration to the autoscaling/v2 API version.
  • kubectl --prune-whitelist: replaced by --prune-allowlist.
  • The apiserver_request_slo_* metrics: renamed to apiserver_request_sli_*.
  • The kube-apiserver --master-service-namespace flag: deprecated.
  • Several unused options for kubectl run: --cascade, --filename, --force, --grace-period, --kustomize, --recursive, --timeout, --wait.
  • The pod-eviction-timeout CLI flag: deprecated.
  • Legacy authentication for Azure and Google Cloud: deprecated.
  • The userspace proxy mode for kube-proxy: removed.
  • Dynamic kubelet configuration: removed.
  • The in-tree OpenStack storage driver (cinder volume type): removed; use the Cinder CSI driver instead.
  • The GlusterFS volume plugin: deprecated in 1.25 and removed in 1.26.