Embracing Open Standards: Why NVIDIA’s GPU DRA Driver Donation to CNCF is a Win for Enterprise AI

By Steve Carter

At Nutanix, we believe that the future of cloud-native infrastructure must be open, collaborative, and capable of handling the immense scale of modern AI. That is why we welcome NVIDIA’s move to officially donate its GPU DRA Driver to the Cloud Native Computing Foundation (CNCF).

This move signals a commitment to vendor-neutral, community-driven development for AI infrastructure, and it perfectly aligns with our vision for seamless, scalable Kubernetes® management.

Under the Hood: What the GPU DRA Driver Does

To understand why this matters, we have to look at how Kubernetes interacts with high-performance hardware via its Dynamic Resource Allocation (DRA) framework:

  • Hardware as Manageable Resources: The driver exposes NVIDIA GPUs and Multi-Node NVLink directly to Kubernetes as manageable resources.
  • Declarative APIs: This new approach replaces the traditional, limited device plugin API with a richer, declarative interface. By reflecting the capabilities of the underlying GPU hardware into standardized APIs, it allows high-level Kubernetes components to easily unlock advanced features.
  • AI-Scale Performance: Ultimately, it is designed specifically to meet the high-performance demands of modern AI/ML workloads at scale.
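To make the declarative model concrete, here is a minimal sketch of how a workload requests a GPU through DRA rather than the legacy device plugin API. This assumes the v1beta1 DRA API (Kubernetes 1.32+) and a `gpu.nvidia.com` DeviceClass published by the NVIDIA driver; the claim and pod names are illustrative:

```yaml
# A ResourceClaim declares *what* the workload needs, in hardware-aware terms.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: single-gpu            # illustrative name
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: gpu.nvidia.com   # DeviceClass exposed by the NVIDIA DRA driver
---
# The pod references the claim instead of requesting an opaque counted resource.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload          # illustrative name
spec:
  containers:
  - name: main
    image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04
    command: ["nvidia-smi"]
    resources:
      claims:
      - name: gpu             # binds this container to the claim below
  resourceClaims:
  - name: gpu
    resourceClaimName: single-gpu
```

Compared with `nvidia.com/gpu: 1` under the device plugin model, the claim is a first-class API object the scheduler can reason about, which is what opens the door to richer selection and multi-node topologies.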

Why Community Ownership is a Game-Changer

Transitioning to a community-owned CNCF model enables the industry to collectively ensure that NVIDIA GPU orchestration follows a vendor-neutral direction at the software layer. By embracing open governance, this shift brings several benefits to the entire cloud-native ecosystem:

  • Unified Standards for AI: The driver will act as a reference implementation for the Kubernetes AI Conformance program, establishing open, community-defined standards for AI hardware acceleration moving forward.
  • Enhanced Interoperability: Placing the driver under SIG Node stewardship ensures tight alignment with core Kubernetes development, helping to ensure consistent API behavior.
  • Collaborative Innovation: Partners across the industry can now collaborate on roadmap and maintainership to drive features that benefit the entire cloud-native ecosystem.

The Nutanix Connection: Powering NKP and NAI

At Nutanix, we are deeply invested in pure, upstream Kubernetes and reducing the friction of deploying enterprise AI. This CNCF donation strengthens the foundation that Nutanix Kubernetes Platform (NKP) and Nutanix Enterprise AI (NAI) are built upon.

NKP is composed of upstream CNCF projects, offering a unified, vendor-agnostic platform for deploying and securing Kubernetes clusters across hybrid and multicloud environments. Because NKP embraces open standards by design, the integration of community-driven DRA features enables our customers to efficiently orchestrate GPU resources without lock-in or complex, proprietary workarounds.

Furthermore, NAI simplifies the deployment of Large Language Models (LLMs) and secure AI endpoints on any CNCF-certified Kubernetes cluster. As the DRA driver standardizes how GPUs and multi-node interconnects are exposed, NAI can leverage these optimized APIs to help drive high performance, predictable costs, and highly efficient AI inferencing at scale.

We are eager to support this vendor-neutral direction and participate in the upstream working groups to drive the next wave of hardware-aligned scheduling.

©2026 Nutanix, Inc. All rights reserved. Nutanix, the Nutanix logo and all Nutanix product and service names mentioned are registered trademarks or trademarks of Nutanix, Inc. in the United States and other countries. Kubernetes is a registered trademark of The Linux Foundation in the United States and other countries. All other brand names mentioned are for identification purposes only and may be the trademarks of their respective holder(s).