The rise of AI is driving organizations to rethink and recalibrate how their workforces and IT resources will move forward. Many have moved beyond experimentation to implementing AI solutions. Going forward quickly, smartly and safely is an onerous task.
In a series of conversations, Nutanix President and CEO Rajiv Ramaswami and Nutanix Chief AI Officer Debojyoti “Debo” Dutta explored some of the biggest challenges enterprise IT teams face today as the world moves beyond GenAI to what’s next.
Their conversations touched on the growing divide between expensive U.S. foundational models and cost-efficient Chinese open source alternatives, the security risks posed by autonomous AI agents, and challenges that demand an entirely new infrastructure layer, one that functions as a control point for all AI traffic flowing through the organization.
They see software evolving to help connect, control and protect a mix of IT systems that enterprises will use to power AI capabilities.
“We see a multi-billion-dollar business opportunity in providing secure access to all things AI,” Ramaswami said.
The geopolitical landscape of AI models has grown competitive, forcing enterprises to weigh leading proprietary models against open source alternatives. Add data sovereignty concerns, which dominate discussions in Europe and Asia, and worries about costs and supply spreading worldwide, and IT investment decisions are as complicated as ever.
“Foundation models are expensive…their bill is going up,” Dutta explained.
“The alternative is open, permissible models. But they’re not as good as the foundation models. There’s a big gap.”
That gap is narrowing fast. Highly competitive Chinese open source models are delivering capabilities that come “almost as close” to those of top U.S. foundational models at a fraction of the cost, Ramaswami noted. This creates a strategic dilemma.
“China as a country is investing in open source models that have come pretty close to the best models available in the U.S., which are all closed-source foundational models,” he said.
“The result is that Chinese models could be adopted broadly around the world.”
Meanwhile, although U.S. foundational models may perform best, they can be expensive and rely on proprietary architectures, raising concerns about vendor lock-in and data sovereignty, especially in markets outside the U.S.
These concerns are evident in enterprises that rely on hybrid cloud IT systems to manage AI capabilities “under their umbrella,” which helps them mitigate vendor dependencies and supply chain risks while maintaining control over proprietary data and intellectual property.
Ramaswami sees an opportunity.
“We need…permissible, safe open source models,” he said.
As AI matures from experimental to business-critical infrastructure, more attention is paid to security. Most companies “now have AI governance in place,” Ramaswami said.
However, the rise of autonomous or agentic AI is forcing a fundamental rethinking of enterprise security architectures. Waves of investment and innovation are rising to meet these growing needs.
Dutta pointed to open source agent platforms such as ClawBot that can act as “a swarm of private agents” capable of performing complex tasks autonomously, including shopping, browsing and accessing various chat systems, all using personal or corporate credentials.
“They can do shopping for you, browsing for you,” he said. “This could really destroy the security apparatus of a company if not protected, if not controlled well.”
There’s a need for intelligent control, something Dutta calls an “AI policy engine” or “AI gateway” that functions as a security and governance layer for all AI interactions.
Dutta explained that an AI gateway layer would control which models can access what data. It would enforce security rules, manage resource utilization costs, and ensure rules adapt to different sovereignty requirements. It would ensure AI agents can connect to LLMs, tools and enterprise data safely and securely.
Think of it as an AI router for your organization, Dutta suggested. Just as network routers manage and secure web traffic, an AI gateway would act as a control point for model traffic, enforcing policies in real-time.
Dutta explained that this AI gateway could provide several functions:
Semantic Routing: Intelligent request routing to different models based on policy. For example, sending creative tasks to Claude but routing sensitive financial queries to a private, on-premises model.
Security and Data Loss Prevention: Real-time monitoring and redaction of data flows. The gateway could detect and scrub personally identifiable information before it leaves the corporate environment, preventing accidental exposure of sensitive data to external AI services.
Safe Execution Environments: Providing sandboxed containers or virtual machines for “safe execution of agentic code,” ensuring that autonomous agents can’t access systems or data beyond their authorized scope.
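To make the first two functions concrete, here is a minimal sketch of how a gateway's policy layer might route and sanitize requests. All of the names are illustrative assumptions: the model identifiers, keyword-based classifier and regex PII patterns stand in for whatever classifiers and policies a real gateway such as Nutanix's would use; this is not its actual implementation.

```python
import re

# Hypothetical policy table (illustrative model names, not a real config):
# low-risk creative work may go to an external model, sensitive queries stay on-prem.
ROUTES = {
    "creative": "claude-public",
    "financial": "private-onprem-llm",
    "default": "private-onprem-llm",
}

FINANCIAL_KEYWORDS = ("revenue", "invoice", "payroll", "forecast")

# Toy PII patterns (email addresses, U.S. SSNs) for demonstration only.
PII_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def classify(prompt: str) -> str:
    """Crude semantic routing: keyword matching stands in for a real classifier."""
    lowered = prompt.lower()
    if any(kw in lowered for kw in FINANCIAL_KEYWORDS):
        return "financial"
    if lowered.startswith(("write", "draft", "brainstorm")):
        return "creative"
    return "default"

def redact(prompt: str) -> str:
    """Scrub PII-like patterns before a prompt leaves the corporate environment."""
    for pattern, token in PII_PATTERNS:
        prompt = pattern.sub(token, prompt)
    return prompt

def route(prompt: str) -> tuple[str, str]:
    """Return (target model, sanitized prompt) according to gateway policy."""
    target = ROUTES[classify(prompt)]
    # Only redact when traffic crosses the trusted boundary to an external model.
    sanitized = redact(prompt) if target == "claude-public" else prompt
    return target, sanitized
```

For example, a creative prompt containing an email address would be routed to the external model with the address replaced by `[EMAIL]`, while a query mentioning “revenue” would be kept, unmodified, on the private on-premises model.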
The lack of control over credentials poses a significant security risk, Dutta emphasized.
“When an agent receives credentials to access enterprise data systems, there’s currently no control point to verify that code is safe or it’s not leaking it to somebody somewhere else,” he said.
More than ever, IT teams need to follow fundamental principles, Dutta said.
“Enterprises can only focus on what’s invariant, so IT will have to serve their customers, and they have to control the flow of tokens.”
As enterprises move from AI experimentation to production deployment, Dutta said they’ll need infrastructure that can support “a whole new set of applications being built with AI, consuming and generating tons and tons of data.”
Dutta said IT systems will need to provide secure, hybrid IT inferencing and agentic application platforms. These systems are best run close to business data for security and compliance reasons, but still have access to public cloud resources when needed. He sees the AI gateway integral to AI strategies that address control, sovereignty, safety and continuous innovation.
Editor’s Note: This article is part of a series based on conversations with Nutanix CEO Rajiv Ramaswami and Chief AI Officer Debo Dutta exploring the evolution of enterprise AI.
Learn about the Nutanix Enterprise AI solution and AI Gateway service, which provides a unified, secure inference endpoint that lets enterprises use cloud-hosted models (and token credits) alongside private LLMs with consistent authentication, observability and token-based rate limiting.
Ken Kaplan is Editor in Chief for The Forecast by Nutanix. Find him on X @kenekaplan and LinkedIn.
© 2026 Nutanix, Inc. All rights reserved. For additional information and important legal disclaimers, please go here.