KRO
Kubernetes is the undisputed standard for container orchestration, but its complexity can be overwhelming. Platform teams are therefore constantly looking for ways to hide that underlying complexity so developers can concentrate fully on writing code. The Kube Resource Orchestrator (KRO) project is one of the most recent and promising answers to this challenge.
What is KRO?
At its core, KRO is an open-source controller that runs directly inside your Kubernetes cluster and functions as a powerful abstraction layer. Instead of manually managing separate Deployments, Services, and Ingress rules, KRO allows you to bundle them into one logical, reusable unit: a ResourceGraphDefinition (RGD).
Think of an RGD as the “blueprint” of your application. Once you apply this blueprint to the cluster, KRO takes over and automates work that would otherwise have to be done by hand.
- Automatic creation of CRDs: KRO generates the required Custom Resource Definitions (CRDs) based on your definition.
- A smart in-cluster controller: A dedicated controller monitors the entire application and ensures all resources remain up to date.
- Intelligent dependency handling: KRO understands the relationships and knows exactly in which order resources must be created. This prevents annoying race conditions common in manual deployments.
- Flexible configuration: You can pass default values, dependent fields, and even values across different resources with CEL, making your manifests more powerful and reusable.
KRO uses CEL (Common Expression Language), which gives more predictable and safer results than Helm’s Go templating. This is intentional: while Go templating is powerful and can run arbitrary logic (which can also be risky), CEL is limited to evaluating simple, safe expressions. That makes the system fundamentally more secure and the outcome of a deployment highly predictable. It also lowers the barrier considerably: implementing operator-like behavior becomes much simpler.
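To make this concrete, here is a minimal sketch of what an RGD could look like, loosely based on the examples in the kro documentation. The exact schema syntax may still change while the project is in alpha, and the WebApp kind, names, and defaults are purely illustrative.

```yaml
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: web-app
spec:
  # This schema is turned into a CRD: developers create "WebApp" objects
  # with only these fields.
  schema:
    apiVersion: v1alpha1
    kind: WebApp
    spec:
      name: string
      image: string | default="nginx:1.27"
      replicas: integer | default=2
  # The resources KRO creates and keeps in sync for every WebApp instance.
  resources:
    - id: deployment
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ${schema.spec.name}
        spec:
          replicas: ${schema.spec.replicas}
          selector:
            matchLabels:
              app: ${schema.spec.name}
          template:
            metadata:
              labels:
                app: ${schema.spec.name}
            spec:
              containers:
                - name: app
                  image: ${schema.spec.image}
                  ports:
                    - containerPort: 80
    - id: service
      template:
        apiVersion: v1
        kind: Service
        metadata:
          # Referencing the deployment by its id tells KRO about the
          # dependency, so the Deployment is created first.
          name: ${deployment.metadata.name}
        spec:
          selector:
            app: ${schema.spec.name}
          ports:
            - port: 80
              targetPort: 80
```

A developer then only creates a small WebApp object with a name and an image; KRO expands it into the Deployment and Service above and keeps them reconciled.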
Future outlook
KRO sounds fantastic and has the potential to drastically reduce the complexity of application deployments. One important point: KRO is still under active development and should be considered an alpha product. While it’s not yet ready for critical production workloads, it does provide a clear glimpse into the simplification it could bring to the Kubernetes ecosystem.
Talos Linux
Another standout is the rising popularity of Talos Linux from Sidero Labs, a Linux distribution purpose-built for running Kubernetes. With Talos, Linux management is no longer required, letting you fully focus on managing Kubernetes.
A few key takeaways:
- Immutable: There is no SSH daemon on the host and no shell to log in to. You configure the node with a declarative YAML file (see the sketch after this list); after that, the node is immutable. Every node in your cluster stays identical, preventing “configuration drift.” Reboots and other maintenance tasks are of course still possible, but only through the API. This marks the shift from “pets” to “cattle.”
- API-driven: Instead of logging into a server, you manage nodes through an API (the Talos API, typically with talosctl). This fits seamlessly with how you already work in Kubernetes: all administration and maintenance tasks, including updates and reboots, are API calls. That simplifies automation and lets you treat infrastructure as code.
- Safer and more predictable: Talos’s minimalist design gives it a very small attack surface. With fewer components and services, there are fewer vulnerabilities. No shell or SSH access means common attack vectors don’t exist. Updates are atomic, reducing error risks. The result: a more robust and predictable platform with far fewer chances of human error.
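To give a feel for that declarative configuration, here is a heavily abbreviated sketch of a Talos machine configuration. In practice the full file, including certificates and cluster secrets, is generated with talosctl; the hostname, disk, and endpoint values below are purely illustrative.

```yaml
# Abbreviated Talos machine configuration (the real file is generated
# by talosctl and contains certificates and secrets as well).
version: v1alpha1
machine:
  type: controlplane            # or "worker"
  install:
    disk: /dev/sda              # disk Talos installs itself onto
  network:
    hostname: control-plane-1
cluster:
  clusterName: demo
  controlPlane:
    endpoint: https://10.0.0.10:6443
```

Apply this file to a freshly booted node and it installs, configures, and joins itself; changing the node later means changing the file and applying it again.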
From talks and real-world examples, it’s clear that many companies use Talos as a solid foundation under their clusters. It eliminates node-specific configuration quirks. Instead, you get a clean, reliable OS that does exactly what’s needed and nothing more. By choosing Talos Linux, you remove Linux management complexity and can focus entirely on efficiently running Kubernetes workloads.
Gateway API
Next was the Traefik Labs talk on the Gateway API, a fascinating, future-focused topic.
Until now, Ingress has been the way to expose your applications in Kubernetes. It works, but anything beyond the basics quickly requires controller-specific annotations. The result: sprawling rules crammed into a single resource, leading to chaos and confusion.
The Gateway API is the successor and solves this once and for all. Here’s why it’s the future:
- Clear roles: Responsibilities are separated. Admins manage the infrastructure (Gateway), developers manage the routes to their apps (HTTPRoute), as illustrated in the sketch after this list. No more debates about who does what.
- Standardization: Vendor-specific annotations are gone. Gateway API defines standard rules for features like traffic splitting and header-based routing. Configurations become portable across vendors.
- Better control: The Gateway API offers far more fine-grained control over traffic, both into the cluster and, increasingly, between services.
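As an illustration of that role split, here is a sketch of a shared Gateway managed by the platform team and an HTTPRoute owned by a development team. The names, namespaces, and the traefik GatewayClass are assumptions for the example; the weighted backendRefs show standardized traffic splitting without any vendor-specific annotations.

```yaml
# Owned by the platform team: the shared entry point into the cluster.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: traefik        # whichever implementation you run
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All                # which namespaces may attach routes
---
# Owned by the application team: routing to their own services.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app
  namespace: team-a
spec:
  parentRefs:
    - name: shared-gateway
      namespace: infra
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /app
      backendRefs:
        - name: my-app-v1
          port: 80
          weight: 90               # standardized traffic splitting
        - name: my-app-v2
          port: 80
          weight: 10
```

Because the route lives in the team’s own namespace, RBAC maps naturally onto the role split.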
Traefik Labs fully embraces the Gateway API. We see it eliminating Ingress complexity and paving the way for a cleaner, safer, and more powerful cloud-native networking future. Compared to a CNI solution like Cilium, which integrates the Gateway API into a complete networking platform, Traefik focuses on what it does best: being a specialized, intelligent, and user-friendly reverse proxy.
Common Expression Language (CEL)
A hot topic after the recent introduction of ValidatingAdmissionPolicies and MutatingAdmissionPolicies in Kubernetes.
These new, native features run directly in the API server, eliminating the latency and availability risks that come with external admission webhooks. Policies are now written declaratively with the Common Expression Language (CEL); think of blocking :latest images or enforcing resource limits and labels. Security and efficiency improve significantly.
A more advanced example: setting a limit on replicas as shown below.
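Below is a sketch of how that could look with a ValidatingAdmissionPolicy and its binding (a stable API since Kubernetes 1.30). The replica ceiling of 5 and the environment: test namespace label are example values, not a recommendation.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: replica-limit
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    # CEL expression evaluated by the API server itself.
    - expression: "object.spec.replicas <= 5"
      message: "Deployments may not request more than 5 replicas."
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: replica-limit-binding
spec:
  policyName: replica-limit
  validationActions: ["Deny"]
  matchResources:
    namespaceSelector:
      matchLabels:
        environment: test
```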

Although this is a major step forward, tools like Kyverno, OPA Gatekeeper, and Kubewarden remain essential for complex scenarios, thanks to their richer functionality and abstractions. Still, it’s clear: for baseline policy management, Kubernetes itself will soon be enough.
CEL ensures a future for Kubernetes security that is lighter, safer, and more integrated than ever before.
See you next year!
ContainerDays was once again a fantastic reminder that the Kubernetes community never stands still. From making our clusters fundamentally more secure with Talos Linux and CEL to simplifying deployments with KRO, we’re ready for the next phase of Cloud-Native engineering. These insights form the foundation for our upcoming projects, ensuring the platforms we build are future-proof and of the highest quality. It was an incredible few days. See you next year!