Top 5 Kubernetes Trends to Watch at KubeCon Amsterdam 2026

By Paul Zerdilas Herrera, Cloud Native Architect, Nutanix

Paul Zerdilas Herrera is a Cloud Native Architect at Nutanix, helping organisations design and operate Kubernetes at scale. After several years working in cloud infrastructure, he continues to explore Kubernetes one day at a time. He values curiosity, clarity, and connection, and enjoys sharing practical lessons from the field with the wider cloud native community.

Connect with Paul on LinkedIn

KubeCon Europe is just around the corner, bringing together in Amsterdam the practitioners, engineering leaders, and voices that influence how cloud-native technologies are run in production today. Kubernetes is no longer a niche technology limited to a handful of early adopters. It has become a core part of enterprise IT, supporting everything from modern application platforms to data-intensive and AI-driven workloads.

KubeCon has always been a good signal of where Kubernetes is heading, but in recent years, it has also become a reflection of where organisations are spending most of their time and energy. The conversations in hallways, talks, and booths tend to circle around the same challenges: running Kubernetes at scale, keeping costs under control, meeting regulatory requirements, and supporting new workloads without adding more complexity.

Looking ahead to KubeCon Amsterdam 2026, five trends stand out. Not because they are entirely new, but because they have moved from theory into day-to-day operational reality. Together, they highlight how Kubernetes platforms are evolving to meet real enterprise needs, and why the way teams design and operate their platforms today matters more than ever.

Nutanix will be part of these conversations at KubeCon Amsterdam 2026, contributing to the community through talks and sessions focused on real-world platform and operational challenges.

You can find Nutanix sessions at the event here.

Data Sovereignty and Regulatory Compliance

In Europe today, more than ever, digital sovereignty and regulatory compliance are no longer theoretical concepts but strategic priorities for enterprises. With the AI Act entering into force and regulations like GDPR and the EU Data Act tightening, compliance can no longer be treated as an afterthought. It has very real consequences for how platforms are designed, where data lives, and how workloads are actually operated.

For platform and Kubernetes teams, this increasingly means making deliberate platform choices. Many teams are standardising on open, portable, upstream Kubernetes platforms to avoid vendor lock-in and retain control as requirements evolve. Data residency is enforced at the cluster level, while policy-as-code is used to define and automatically enforce what can be deployed, where, and under which conditions.

Compliance is also moving earlier in the delivery process. Instead of relying on manual reviews or periodic audits, teams are embedding guardrails directly into their pipelines. Policies block non-compliant configurations before they ever reach production, GitOps workflows ensure every change is traceable and auditable, and continuous monitoring makes it easier to spot and correct violations early. The result is a Kubernetes platform that can support sovereign operations across cloud, on-premises, and edge environments without slowing teams down.
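To make "policy-as-code" concrete, here is a minimal sketch of what such a guardrail can look like, written as a Kyverno ClusterPolicy. The policy name, label key, and region prefix are hypothetical; a real deployment would align them with your own compliance taxonomy:

```yaml
# Illustrative Kyverno ClusterPolicy: reject workloads that do not declare
# an approved EU data-residency region via a label. All names here are
# placeholders, not a prescribed standard.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-eu-data-residency
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-residency-label
      match:
        any:
          - resources:
              kinds:
                - Deployment
                - StatefulSet
      validate:
        message: "Workloads must declare an approved EU data-residency region."
        pattern:
          metadata:
            labels:
              compliance/data-residency: "eu-*"
```

Because the policy lives in the cluster (and typically in Git alongside everything else), a non-compliant manifest is rejected at admission time rather than discovered in a quarterly audit.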

Cost Management and Kubernetes FinOps

Kubernetes makes it easy to scale workloads. Unfortunately, it also makes it easy to scale cloud bills. The same flexibility that makes containers so powerful—the ability to scale dynamically, spin up resources in seconds, and share underlying infrastructure—has broken the old ways of managing costs. In environments where clusters are spread across clouds, regions, and on-premises locations, it’s often hard to answer a simple question: where is the money actually going?

And this isn’t just a perception. The State of FinOps Report 2025 shows that workload optimisation and waste reduction are now the top priorities for FinOps practitioners [1]. As efficiency moves higher on the agenda, cost management is no longer something Finance deals with at the end of the month.

In practice, platform teams are trying to apply FinOps principles directly to Kubernetes. They need real-time visibility into costs, broken down by cluster, namespace, and service, along with better alignment between actual usage and resource requests. Automated rightsizing, smarter workload placement, and anomaly detection all help make costs more predictable and easier to act on for both engineering and finance.
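As a simplified illustration of the request-versus-usage gap described above, the sketch below flags containers whose CPU requests far exceed observed peak usage and suggests a rightsized request. The workload names, numbers, and thresholds are invented for the example; real FinOps tooling would pull usage from a metrics backend:

```python
# Minimal rightsizing sketch: compare requested CPU (millicores) against
# observed p95 usage and suggest a smaller request with headroom.
# Thresholds and workload data are illustrative, not recommendations.

def rightsize(workloads, headroom=1.3, waste_threshold=2.0):
    """Suggest new CPU requests (millicores) for over-provisioned workloads.

    workloads maps a name to (requested_millicores, p95_usage_millicores).
    """
    suggestions = {}
    for name, (requested_m, p95_usage_m) in workloads.items():
        if p95_usage_m > 0 and requested_m / p95_usage_m >= waste_threshold:
            # Add headroom, then round up to the nearest 50m for tidy manifests.
            suggested = int(-(-p95_usage_m * headroom // 50) * 50)
            suggestions[name] = suggested
    return suggestions

usage = {
    "checkout-api": (1000, 120),   # requests 1 CPU, peaks at 120m: wasteful
    "search-index": (500, 400),    # reasonably sized, left alone
}
print(rightsize(usage))  # → {'checkout-api': 200}
```

The same idea, applied continuously and automatically, is what tools like the Vertical Pod Autoscaler provide; the point is that the data to act on already exists in the cluster.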

As Kubernetes estates grow more distributed, unified management and predictable pricing models are becoming increasingly important to reduce both operational overhead and cost complexity across hybrid and multicloud environments.

Platform Engineering, GitOps, and Internal Developer Platforms

Speed matters. In today’s fast-paced world, how quickly an organisation can innovate is often the difference between leading the market and playing catch-up. Developers sit at the centre of that innovation, but as platforms grow more complex, too much time is still spent navigating operational friction instead of delivering new capabilities.

Platform engineering is one way teams are addressing this. The goal is to remove unnecessary friction and give developers a shared, reliable foundation to build on. Platform teams focus on simplifying how infrastructure is consumed, creating clear and repeatable workflows that help teams move faster without sacrificing stability.

Internal developer platforms provide the paved path. They offer predictable ways to build, ship, operate, and observe software across environments. GitOps workflows play a key role here, enabling declarative and auditable deployments where changes are reviewed, tracked, and applied consistently. This allows teams to move faster while still maintaining security and operational consistency at scale.
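A GitOps workflow of this kind is often expressed as a declarative "application" object that points the cluster at a Git repository. Below is a minimal sketch using an Argo CD Application; the repository URL, paths, and names are placeholders:

```yaml
# Illustrative Argo CD Application: the cluster continuously reconciles
# itself against a Git repository, so every change is reviewed in Git,
# tracked, and applied consistently. All names and URLs are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payments
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/payments-config.git
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert manual drift back to the state in Git
```

With `selfHeal` enabled, Git becomes the single source of truth: an out-of-band `kubectl edit` is automatically reverted, which is precisely the auditability property the paragraph above describes.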

As platform engineering matures, more attention is being paid to the foundations that make it sustainable. Consistent and secure infrastructure across environments helps platform teams support these internal platforms over time, while still giving developers the freedom to choose the tools and workflows that work best for them.

Enterprise Adoption of Kubernetes, Driven by AI

AI ambition is everywhere. Kubernetes has become the platform many teams rely on to actually run AI in production. According to the CNCF Annual Cloud Native Survey, Kubernetes is now the de facto operating system for AI, with production usage reaching 82% in 2025 [2].

That adoption isn’t accidental. AI workloads are demanding by nature: bursty training jobs, shared GPU infrastructure, and inference services that need to scale reliably under load. Kubernetes gives teams a practical way to handle both experimentation and production, bringing structure and control to environments where resource contention and unpredictability are the norm.

Running AI in production raises the bar for operations. GPUs, data pipelines, and inference workloads add new pressure on scheduling, observability, and security. AI is not just another workload on the cluster; it changes how platforms are designed and operated day to day. This is also why higher-level frameworks such as Kubeflow are gaining traction, providing standard ways to define, run, and scale AI pipelines on Kubernetes without reinventing everything from scratch.

As AI becomes part of core enterprise systems, expectations on the underlying platform rise with it. Teams are looking for Kubernetes platforms designed for GPU-intensive workloads, capable of running the same AI pipelines consistently across on-premises and cloud environments, and ready for production AI from day one.

Multicloud Deployments and Edge Expansion

Running Kubernetes across multiple environments is now the default, not the exception. Clusters span different clouds, regions, and increasingly edge locations. Some run centrally, others closer to users and data, often built at different times and managed by different teams. While this brings flexibility, it also changes how platforms actually need to be run.

The challenge shows up in day-to-day operations, not in architecture diagrams. A deployment behaves slightly differently in one environment. A policy enforced in one cluster is missing in another. Identity, networking, and lifecycle management vary just enough between providers to create friction. Over time, teams spend more effort duplicating work and reconciling differences than improving the platform itself.

There are good reasons why teams accept this complexity. Multicloud reduces dependency on a single provider, supports regulatory and data residency requirements, and enables access to specialised resources such as GPUs. Edge use cases accelerate this further, particularly in retail, manufacturing, healthcare, and telecom environments where latency and locality matter.

At this point, the challenge is no longer where Kubernetes runs, but how it is operated. Managing each environment in isolation does not scale. What teams are really looking for is a consistent operational model, with shared policies and lifecycle management applied across all clusters, regardless of location. Without that foundation, multicloud flexibility quickly turns into operational drag.

See Nutanix at KubeCon Amsterdam 2026

These five trends highlight how far Kubernetes has evolved, from a powerful container orchestrator into a foundational enterprise platform. What connects them is a shared need across organisations: simpler operations, stronger governance, and infrastructure that can truly scale across clouds, data centres, and edge environments.

As AI, compliance requirements, and multicloud strategies reshape the way platforms are built, one thing becomes clear: getting Kubernetes right starts with getting the foundation right. Teams need consistent, secure, and compliant infrastructure that works wherever they need it, without adding unnecessary complexity.

Whether you’re modernising your platforms, preparing for AI workloads, or looking to unify Kubernetes operations across environments, now is the time to take a fresh look at your Kubernetes strategy.

If you'd like to continue the conversation, meet the Nutanix team at KubeCon Amsterdam 2026 (Booth 905) or book a session with us to discuss your Kubernetes journey.

1. https://data.finops.org/

2. https://www.cncf.io/announcements/2026/01/20/kubernetes-established-as-the-de-facto-operating-system-for-ai-as-production-use-hits-82-in-2025-cncf-annual-cloud-native-survey/

©2026 Nutanix, Inc. All rights reserved. Nutanix, the Nutanix logo and all Nutanix product and service names mentioned herein are registered trademarks or trademarks of Nutanix, Inc. in the United States and other countries. All other brand names mentioned herein are for identification purposes only and may be the trademarks of their respective holder(s).