Kubernetes began as a tool designed to simplify the lives of software developers. Now it’s poised to become omnipresent across the computing landscape.
“Essentially, all new applications are written to run on Kubernetes,” said Dan Ciruli, Nutanix’s vice president and general manager for cloud native technologies, in an interview with The Forecast.
“We'll be using it for everything eventually.”
Ciruli recalled Kubernetes’ evolution from a cloud-native development tool to a must-have platform for every kind of computing: cloud infrastructures, on-premises data centers, AI/ML apps and devices connected at the network edge. His perspective addresses some of the key computing issues that IT leaders and software developers will face in the next few years.
Ciruli has closely followed the rise of Kubernetes since its arrival a decade ago. The platform helps developers deploy microservices-based architectures, in which each microservice typically runs in its own container: a portable package that bundles an application with everything it needs to run.
“As a developer, it’s so much easier to put your app in a container,” Ciruli said. “It makes testing and deployment really, really simple.”
Kubernetes and similar container orchestration tools help developers weave clusters of microservices together and keep them all running reliably. If one microservice crashes, for instance, the others keep operating while Kubernetes automatically detects the failure and restarts or reschedules the affected containers. This is a distinct advantage over monolithic applications, where losing a single process can crash the entire app.
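That self-healing behavior can be sketched with a minimal Kubernetes Deployment manifest. This is a hypothetical example, with illustrative names and image: the Deployment declares a desired number of replicas, and the control plane works continuously to keep that many healthy copies running.

```yaml
# Hypothetical example: ask Kubernetes to keep three replicas
# of one microservice running at all times.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service              # illustrative service name
spec:
  replicas: 3                       # desired state: three running copies
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: example.com/orders:1.0   # illustrative container image
        livenessProbe:                  # failing probes trigger a restart
          httpGet:
            path: /healthz
            port: 8080
```

If one replica crashes or stops answering its liveness probe, the other two keep serving traffic while Kubernetes replaces the failed copy, which is the contrast with monolithic applications described above.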
Cloud native development is increasingly mainstream. The Cloud Native Computing Foundation (CNCF) notes that 15.6 million developers worldwide build apps in the cloud. Precedence Research, meanwhile, expects the market for cloud native development to surge from $50.3 billion in 2025 to $172.5 billion by 2034.
Ciruli said that many enterprises are deploying cloud native applications on-premises across their own data centers. For many, that’s because certain data can’t be hosted in the cloud for compliance reasons. For example, he said government agencies and global corporations might want containerized apps in an air-gapped environment unplugged from the cloud.
“You should be able to run your application where you want to,” Ciruli said.
Putting cloud-native apps to work in on-prem environments is appealing in principle but complicated in practice.
“Using Kubernetes in the cloud is solved,” Ciruli said. “But using Kubernetes on-prem can still be difficult.”
He said the vast majority of IT teams rely on virtual machines to handle workloads, and getting Kubernetes to run on virtual machines can be challenging. These challenges motivated Nutanix, a pioneer of hyperconverged infrastructure that virtualizes compute, storage and networking, to build a single control plane that runs Kubernetes anywhere. The Nutanix Kubernetes Platform (NKP) simplifies microservices management in on-prem data centers, in the cloud and on edge devices.
“NKP Full Stack lets enterprises run any combination of virtualized and containerized applications,” Ciruli said.
He said Nutanix works with partners to provide additional capabilities. Traefik Labs, for example, adds a unified application intelligence layer that orchestrates application programming interfaces (APIs), routing some data traffic to VMs and other traffic to containers, depending on which best suits an enterprise’s needs.
“It’s just making sure enterprises have everything they need in the world we'll be living in for the next 20 years, in which some applications or services are running in VMs and others are running in containers,” Ciruli said.
“We bring the application layer, Nutanix brings the rest of the infrastructure, and that's why it's a beautiful partnership,” said Sudeep Goswami, CEO of Traefik Labs, in an interview at KubeCon + CloudNativeCon North America 2025.
“Kubernetes is the de facto platform for AI infrastructure,” Ciruli told The Forecast. Multiple data points bear this out: microservices and containers are well suited to a wide range of AI functions. Research commissioned by Portworx, for instance, suggests 54% of AI/ML workloads run on Kubernetes. Sysdig’s 2025 Kubernetes and Cloud-Native Security Report, meanwhile, noted that AI/ML workloads increased 500% in the past year.
For all its appeal to developers, Kubernetes has a steep learning curve because it touches on pretty much every aspect of coding, testing, deploying, securing and maintaining applications. Thus, finding seasoned Kubernetes practitioners is a global challenge.
Could AI help close these gaps?
“I have seen some pretty concrete examples of people using ML or AI to make running Kubernetes at scale easier,” Ciruli said.
Ciruli is skeptical, however, of claims that large language models (LLMs) will usher in massive job losses.
“I do believe that intelligent systems can be easier to run, meaning someone doesn't have to get five years of experience before they can run a Kubernetes cluster,” he said.
“Right now, Kubernetes is in the middle of taking over the edge,” Ciruli said. Assembly lines and distribution centers, for instance, are installing smart edge devices to monitor performance and take automation into realms like robotics.
Ciruli sees containers and Kubernetes becoming the default way workloads are deployed to physical devices.
“Containerization and Kubernetes have given a single model that lets developers deploy applications anywhere – from the cloud to the data center to the edge. We see it in vehicles on the road, on the sea, and in the air. It simplifies the lives of developers and makes operations much more consistent. And in the future, who knows where else we’ll see it.”
Tom Mangan is a contributing writer. He is a veteran B2B technology writer and editor, specializing in cloud computing and digital transformation. Contact him on his website or LinkedIn.
© 2026 Nutanix, Inc. All rights reserved. For additional information and important legal disclaimers, please go here.