Now Available: Nutanix Enterprise AI with NVIDIA Enable Agentic AI from the Edge to Public Clouds

By Mike Barmonde, Product Marketing Manager, AI

Announced at .NEXT 2025 in Washington, D.C., the latest version of the Nutanix Enterprise AI (NAI) product is available with agentic support for NVIDIA agentic workflows. NAI and NVIDIA NIMTM and NVIDIA NeMoTM microservices and models make setting up an agentic workflow simpler than ever, using NVIDIA Accelerated Computing.

While GenAI chat systems introduce AI responses for humans to act on, AI agents take those responses and automate actions, answering questions, finding solutions, and performing relevant tasks autonomously.

How? Let’s dive in.

What is an AI Agent?

AI agents (also referred to as agentic AI) are AI systems that not only perform specific tasks but also exhibit autonomy, adaptability, and self-awareness. These qualities enable AI systems to operate effectively and be closer to how humans would act. Agentic AI can understand context, learn from experiences, and reason to make decisions that align with broader objectives.

Enterprises have had a relatively short timeframe to implement GenAI. Building a resilient and secure architecture that can also handle post-implementation tasks and management (Day 2 operations) continues to be a challenge. Ensuring that the same architecture can also adapt to the ever-changing AI landscape, like agentic AI, is equally daunting.

Agentic AI solutions aim to mimic human behavior more closely by leveraging LLMs for planning, tools for performing various deterministic tasks, and memory for context retrieval.

An agent workflow includes multiple LLM-type models. For example, a retrieval-augmented generation (RAG) flow can send context to a reasoning LLM, then use the reranking and safety guardrails to determine the best answer and ensure the safety of the response. An embedding model helps integrate vector databases. Together, these workflows can also be delivered as preset ‘blueprints’. Whether you’re focused on AI inferencing, tuning, or training, agents provide new ways of making AI work for you.

How Nutanix and NVIDIA Simplify GenAI

“Simple” doesn’t always mean easy. Advancements in GenAI demand new solutions that may redefine your tech stack, and agentic, in particular, requires a solid foundation with GenAI to succeed.

Nutanix and NVIDIA provide an easy way to help deploy GenAI with Nutanix Enterprise AI. NAI helps create and manage secure endpoints with APIs to connect your GenAI applications to NVIDIA NIM and NeMo models, enabling a resilient and secure model repository for GenAI with Day 2 operations.

Graphic to represent The value of NVIDIA with Nutanix

The value of NVIDIA with Nutanix.

The launch of NAI last November included built-in integration of NVIDIA NIM microservices for the latest models from the NVIDIA API catalog. The benefits of combining NVIDIA NIM with NAI include:

  • Simplified AI Model Deployment: Easily deploy, manage, and maintain GenAI models across various environments.
  • Accelerated Time-to-Value: Speed up your organization's time-to-value for enterprise AI.
  • Operational Readiness: Enjoy operational readiness out of the box with our unified platforms, designed to leverage existing skillsets.
  • Industry-Leading Performance: Experience leading performance and low latency with our cutting-edge technology.
  • Unified Platform: Benefit from a unified platform that simplifies the deployment, operation, and management of large AI models at scale.
  • AI-Ready Storage: Leverage the NVIDIA AI Data Platform with the Nutanix Unified Storage solution as a consolidated data platform for the ingestion, processing, and archiving of AI data.

And what about AI agents? They’re already here.

Enhancing Workflows of Agents with LLMs

NVIDIA GTC 2025 ushered in the new world of agents. Nutanix Enterprise AI integrates agentic workflows for your GenAI tasks with NVIDIA NIM and NeMo microservices and models, such as NeMo Retriever for agentic AI applications and NeMo Guardrails for safeguarding agentic applications. Simplify, control, and automate agent workflows with choice and flexibility in three ways:

Simplify agentic workflows with resilient shared AI services

Combine models by use case with secure endpoints and APIs into a single shared service for multiple applications to access and leverage.

As agentic workflows progress, the need to reuse multiple models and endpoints is key for achieving efficiency and performance across applications. Below is an example of how a RAG model set of endpoints and APIs could work for various apps.

Graphic to represent An example of a single NAI shared service model for RAG that includes NVIDIA NIM and NeMo microservices with secure endpoints that can be consumed by multiple applications.

An example of a single NAI shared service model for RAG that includes NVIDIA NIM and NeMo microservices with secure endpoints that can be consumed by multiple applications.

Control Day 2 operations with agentic function-calling integrations

Some NVIDIA NIM and NeMo microservices models support function calling options, also known as ‘tool calling’. These can query external data sources that are automatically integrated into LLM prompts. These extra features streamline end-user demand for contextful answers by providing logical data where applicable, potentially reducing the Day 2 operational overhead with intuitive responses and automation.

NAI adds a one-click ability to include tool calling for compatible LLMs where applicable. An example below shows how turning on function calling can insert the real-time weather of a specific location:

The sample code of an endpoint’s payload below asks about the weather in Santa Clara, CA, and is checked to include ‘Tool Calling’.

Graphic to represent The sample code of an endpoint’s payload below asks about the weather in Santa Clara, CA, and is checked to include ‘Tool Calling’.

The response is then augmented by turning on the function call, which requests the real-time weather data:

Graphic to represent The response is then augmented by turning on the function call, which requests the real-time weather data

Automate the security of agent workflows with guardrail models

Malicious prompt injections can lead to a complete compromise of an LLM by removing its core guardrails. Combined with the use of sensitive enterprise data and automated agent workflows, the security of enterprise AI agents must be tightly fortified. NVIDIA NeMo guardrails provide several mechanisms to help protect an LLM-powered chat application against common LLM vulnerabilities, such as jailbreaks and prompt injections.

Graphic to represent Nutanix and NVIDIA end-to-end agentic RAG workflow with NeMo guardrails as a centerpiece

Nutanix and NVIDIA end-to-end agentic RAG workflow with NeMo guardrails as a centerpiece.

Other notable improvements:

  • More models from NVIDIA NIM and NeMo microservices
    NAI presents numerous NVIDIA NIM and NeMo microservices for the latest AI models. Pre-validated options offer pre-configured infrastructure settings (vCPU, Memory(RAM), GPU memory, and an inference engine) that have been tested and validated, making their implementation very simple.

You can easily switch between both by toggling the ‘Show only Pre-validated Models’ switch:

Graphic to represent You can easily switch between both by toggling the ‘Show only Pre-validated Models’ switch
  • Deploy more models easily
    NAI includes non-pre-validated models to ensure you have more choices to easily deploy models without configuring their logistics.
Graphic to represent Deploy more models easily

Enabling a RAG Workflow - The Killer App for GenAI

NVIDIA NIM and NeMo microservices can be composed into an agentic RAG workflow. NAI then deploys this workflow via its standardized inference management.

Here are the components of a Nutanix plus NVIDIA agent-based RAG workflow and what each one does:

  1. Nutanix Enterprise AI (NAI), An inference platform and model management repository that deploys your choice of AI models as secure endpoints as a resilient and highly secure solution for Day 2 operations. It deploys the necessary NVIDIA NIM and NVIDIA NeMo microservices models and efficiently provides the compute requirements of these models.
  2. Nutanix Kubernetes Platform (NKP): A cloud-native platform to run Kubernetes® that helps simplify platform engineering by reducing operational complexity and establishing consistency across any environment. It helps deploy components for agentic AI and RAG based on blueprint definitions.
  3. Nutanix Database Service (NDB) for a managed and secure vector database deployment using PostgreSQL.
  4. Nutanix Unified Storage (NUS) and the Nutanix Data Lens SaaS-based data security solution, for high-performing, cross-cloud data control and availability, creating a consistent data experience for enterprise AI data, like secure data pipelines.
Graphic to represent Nutanix and NVIDIA end-to-end agentic RAG workflow with NeMo guardrails.

Nutanix and NVIDIA end-to-end agentic RAG workflow with NeMo guardrails.

Nutanix and NVIDIA Enable Agentic AI

Nutanix and NVIDIA provide what you need to get started with an agentic workflow for RAG with the Nutanix GPT-in-a-Box solution, including infrastructure using NVIDIA Accelerated Computing and Nutanix Kubernetes Platform (NKP), Nutanix Unified Storage (NUS), and NAI as an inference platform for NVIDIA NIM and NeMo microservices.

Graphic to represent GPT-in-a-Box is for everything needed for agentic architecture from Nutanix and NVIDIA, including data services..

GPT-in-a-Box is for everything needed for agentic architecture from Nutanix and NVIDIA, including data services.

Also announced at NVIDIA GTC 2025, NUS is now collaborating with NVIDIA's Enterprise Storage Partner program. With membership in the NVIDIA-Certified Systems Program, NUS is an NVIDIA Enterprise Storage Validated solution and is ready to support your NVIDIA-powered AI workloads and reference architectures like the NVIDIA AI Data Platform.

 

Graphic to represent Nutanix Unified Storage supports NVIDIA GPUDirect Storage and is NVIDIA OVX certified

Nutanix Unified Storage supports NVIDIA GPUDirect Storage and is NVIDIA OVX certified.

Nutanix Unified Storage also supports NVIDIA GPUDirect Storage, helping create a direct data path between local or remote storage and GPU memory, enabling a direct-memory access (DMA) engine near the network adapter or storage without burdening the CPU.

Graphic to represent NVIDIA GPUDirect and Nutanix Unified Storage create a direct path to your data.

NVIDIA GPUDirect and Nutanix Unified Storage create a direct path to your data.

Dive deeper into this with a link at the end of this blog.

Together, Nutanix and NVIDIA become your key AI strategic partners to deploy AI anywhere — whether at the edge, core, or public cloud — depending on your needs.

Nutanix GPT-in-a-Box, combined with NVIDIA Accelerated Computing and Certified Systems, ensures optimal performance without redefining architectures.

Graphic to represent GPT-in-a-Box can be configured with NAI to deploy from the edge to public clouds.

GPT-in-a-Box can be configured with NAI to deploy from the edge to public clouds.

Nutanix Enterprise AI Makes GenAI Simple - Start Today

Nutanix and NVIDIA help demystify AI agents. From simple model delivery to running your AI agents and workloads, we make it simple to make your AI strategy work with a resilient, secure, and operational solution.

So, what’s your next move?

Nutanix Customers Can Unlock a 45-day Trial of NAI

Download NAI for Kubernetes

  • Products:Nutanix Database Service, Nutanix Kubernetes Platform, Nutanix Unified Storage
  • Use Cases:AI ML

May 7, 2025

See NAI with NVIDIA in Action

Take a guided test drive of NAI.

  • Products:Nutanix Database Service, Nutanix Kubernetes Platform, Nutanix Unified Storage
  • Use Cases:AI ML

May 7, 2025

  • Products:Nutanix Database Service, Nutanix Kubernetes Platform, Nutanix Unified Storage
  • Use Cases:AI ML

May 7, 2025

About Nutanix

Nutanix is a global leader in cloud software, offering organizations a single platform for running apps and data across clouds. With Nutanix, organizations can reduce complexity and simplify operations, freeing them to focus on their business outcomes. Building on its legacy as the pioneer of HCI, Nutanix is trusted by companies worldwide to power hybrid multicloud environments consistently, simply, and cost-effectively. Learn more at www.nutanix.com or follow us on social media @nutanix.

© 2025 Nutanix, Inc. All rights reserved. Nutanix, the Nutanix logo, and all Nutanix product and service names mentioned herein are registered trademarks or unregistered trademarks of Nutanix, Inc. (“Nutanix”) in the United States and other countries. Kubernetes® is a registered trademark of the Linux Foundation. Other brand names or marks mentioned herein are for identification purposes only and may be the trademarks of their respective holder(s). This blog is for informational purposes only and nothing herein constitutes a warranty or other binding commitment by Nutanix. This blog contains express and implied forward-looking statements, including but not limited to statements regarding our plans and expectations relating to new product features and technology that are under development, the capabilities of such product features and technology and our plans to release product features and technology in the future. Such statements are not historical facts and are instead based on our current expectations, estimates and beliefs. The accuracy of such statements involves risks and uncertainties and depends upon future events, including those that may be beyond our control, and actual results may differ materially and adversely from those anticipated or implied by such statements. Any forward-looking statements included herein speak only as of the date hereof and, except as required by law, we assume no obligation to update or otherwise revise any of such forward-looking statements to reflect subsequent events or circumstances. Any future product or product feature information is intended to outline general product directions, and is not a commitment, promise or legal obligation for Nutanix to deliver any functionality.  This information should not be used when making a purchasing decision. Our decision to link to or reference an external site should not be considered an endorsement of any content on such a site. Certain information contained in this content may relate to, or be based on, studies, publications, surveys and other data obtained from third-party sources and our own internal estimates and research. While we believe these third-party studies, publications, surveys and other data are reliable as of the date of this paper, they have not independently verified unless specifically stated, and we make no representation as to the adequacy, fairness, accuracy, or completeness of any information obtained from a third-party.