Executive Summary 

The integration of Nutanix Cloud Platform with external storage providers like Everpure (formerly Pure Storage) FlashArray and Dell PowerStore provides a disaggregated architecture that allows for independent scaling of compute and storage resources, leveraging the efficiency and performance of NVMe/TCP over standard Ethernet.

Organizations that currently use Fibre Channel (FC) as the storage access protocol for external storage can convert their existing infrastructure to NVMe/TCP when adopting the Nutanix Cloud Platform (NCP) with external storage solution. This document outlines the steps for deploying NCP with external storage, focusing on a strategic migration from legacy Fibre Channel (FC) to the modern, high-performance NVMe over TCP (NVMe/TCP) storage protocol.

NVMe/TCP Introduction

NVMe and NVMe over Fabrics: The Modern Data Center Standard

NVM Express (NVMe) is a standardized high-performance access protocol ratified by the NVM Express Organization. It was developed as a modern successor to the SCSI (Small Computer System Interface) protocol, which served as the foundation of data center storage for over three decades.

While SCSI was originally designed for servers with limited processing power and mechanical hard drives (spinning disks), NVMe was architected from the ground up to exploit the massive parallelism found in modern multi-core CPUs and NAND Flash-based Solid-State Drives (SSDs). 

The core of the NVMe Base Specification was initially designed to replace legacy interfaces (like SATA and SAS) by connecting storage directly to the system's PCI Express (PCIe) bus. This eliminated the bottlenecks of host bus adapters (HBAs) and reduced latency significantly.

However, the industry soon extended this high-performance protocol beyond the chassis of a single server to function over networks. This extension is known as NVMe over Fabrics (NVMe-oF). Defined in the NVMe over Fabrics Specification, it allows hosts to access remote storage with latency comparable to direct-attached local drives. NVMe-oF supports several network transports, including:

  • FC-NVMe: NVMe over Fibre Channel.
  • NVMe-RoCE/InfiniBand: NVMe over RDMA (Remote Direct Memory Access).
  • NVMe/TCP: NVMe over standard TCP/IP.

The Power of NVMe/TCP

Among the available transports, NVMe/TCP has seen rapid adoption across data centers, with support from storage vendors such as Everpure (formerly Pure Storage) and Dell. A helpful way to understand its role is to compare it with the previous generation of storage protocols:

  • Fibre Channel (FC) utilizes the FC-4 Protocol Mapping layer to encapsulate SCSI commands into Information Units (IUs), which are then transmitted via FC-2 frames.
  • iSCSI functions as an application layer protocol that encapsulates SCSI commands into iSCSI Protocol Data Units (PDUs), which are then transported over TCP/IP.
  • NVMe/TCP, in comparison, utilizes the NVMe/TCP transport binding to encapsulate NVMe commands into NVMe/TCP PDUs, which are then transported natively over TCP/IP without any SCSI translation, making it very efficient.
Figure 01: Protocol Encapsulation Across Storage Transports: Native NVMe/TCP vs SCSI-Based iSCSI and Fibre Channel
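To make the encapsulation difference concrete, the sketch below packs the 8-byte NVMe/TCP common header (PDU type, flags, header length, data offset, and total PDU length) that prefixes every NVMe/TCP PDU, using the Initialize Connection Request exchanged when a host first connects to a controller as the example. This is a simplified illustration rather than a working initiator; the field layout follows the common-header definition in the NVMe/TCP transport specification, and the function name is illustrative.

```python
import struct

# PDU type defined by the NVMe/TCP transport specification.
PDU_TYPE_ICREQ = 0x00   # Initialize Connection Request (host -> controller)

def pack_common_header(pdu_type: int, flags: int, hlen: int, pdo: int, plen: int) -> bytes:
    """Pack the 8-byte NVMe/TCP common header (CH).

    Layout: PDU-Type (1B), FLAGS (1B), HLEN (1B), PDO (1B), PLEN (4B, little-endian).
    """
    return struct.pack("<BBBBI", pdu_type, flags, hlen, pdo, plen)

# An ICReq PDU is a fixed 128-byte PDU with no data section, so the header
# length and total PDU length are both 128 and the data offset is 0.
icreq_header = pack_common_header(PDU_TYPE_ICREQ, flags=0, hlen=128, pdo=0, plen=128)
print(icreq_header.hex())  # 8 bytes on the wire; NVMe commands ride in later capsule PDUs with no SCSI translation
```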

Just as iSCSI democratized SAN storage by leveraging Ethernet, NVMe/TCP extends the NVMe protocol across standard Ethernet networks using the TCP transport binding defined in the NVMe TCP Transport Specification. This provides a storage access protocol that is highly performant, flexible, and scalable across the data center fabric. Using NVMe/TCP enables organizations to standardize on Ethernet as the network technology for all purposes within a data center without compromising on performance or scalability. By doing so, it reduces the number of network technologies and equipment required in the data center, simplifying operations and lowering costs.

Why NVMe/TCP

The inherent benefits of the base NVMe protocol carry over to its extensions across the various fabric networks, including NVMe/TCP. The two primary factors contributing to NVMe's superior performance are:

  • Support for a High Number of Queues: NVMe supports up to 64K queues, each up to 64K entries deep, compared to SCSI's single queue with a maximum depth of 256 commands.
  • Efficient Command Set: NVMe employs a compact and efficient command set, unlike the larger and more complex legacy SCSI commands.

The reduced command set enables the NVMe protocol processing code to be nimble and resource-efficient, thereby minimizing CPU load. The support for a large number of deeper queues facilitates parallel IO processing, complementing modern multi-core CPU architectures. Unlike single-queue designs employed by SCSI, which introduce context-switching and locking penalties, NVMe allows each CPU core to directly submit commands to its dedicated queues without locks or contention. This parallelism eliminates bottlenecks, reduces latency, and optimizes throughput. Consequently, NVMe achieves nearly linear scalability for IOPS in relation to the underlying network speed, making it an ideal solution not only for large-scale virtualization consolidation workloads but also modern cloud native and AI workloads.
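The sketch below is a minimal conceptual model, not real driver code, of the difference described above: a SCSI-style design funnels every core's commands through one lock-protected queue, while an NVMe-style design gives each core its own submission queue so commands are enqueued without locking or contention. The class and method names are illustrative only.

```python
from collections import deque
from dataclasses import dataclass, field
from threading import Lock

@dataclass
class ScsiStyleQueue:
    """Single shared queue: every core must take the same lock to submit."""
    lock: Lock = field(default_factory=Lock)
    commands: deque = field(default_factory=deque)

    def submit(self, core_id: int, command: str) -> None:
        with self.lock:                       # serialization point shared by all cores
            self.commands.append((core_id, command))

@dataclass
class NvmeStyleQueues:
    """Per-core submission queues: each core appends to its own queue, lock-free."""
    num_cores: int
    queues: list = field(default_factory=list)

    def __post_init__(self):
        # NVMe allows up to 64K queues; here we simply create one per core.
        self.queues = [deque() for _ in range(self.num_cores)]

    def submit(self, core_id: int, command: str) -> None:
        self.queues[core_id].append(command)  # no shared state, no contention

# Usage: two cores submit reads concurrently without touching shared state.
nvme = NvmeStyleQueues(num_cores=8)
nvme.submit(0, "READ LBA 0x1000")
nvme.submit(3, "READ LBA 0x2000")
```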

The Business Case for Migrating to NVMe/TCP

Migrating external storage connectivity from Fibre Channel to NVMe/TCP provides significant operational and performance advantages:

  • Reduced complexity and standardization: NVMe/TCP eliminates the need for separate FC fabrics, Host Bus Adapters (HBAs), and dedicated FC expertise. It leverages standard Ethernet and existing operational tooling, allowing organizations to standardize on Ethernet for all data center traffic.
  • Simplicity: NVMe/TCP is natively L3 routable. This enables simpler scaling across data center locations and more efficient disaster recovery (DR) across geographically dispersed sites using standard, existing IP routing infrastructure.
  • Performance: NVMe/TCP delivers low overhead and high parallelism, meeting most enterprise latency and throughput requirements without the need for specialized dedicated equipment.
  • Future-proofing: NVMe/TCP aligns well with cloud architectures, benefits from broader vendor support, and leverages the faster growth trajectory of Ethernet speeds compared to Fibre Channel.

Administrator Operations Comparison

The comparison below highlights how administrative tasks are simplified when using NVMe/TCP with Nutanix and Everpure (formerly Pure Storage) FlashArray compared to traditional Fibre Channel:

Switch
  • FC (Fibre Channel): Add zones; add hosts and targets to the zone.
  • NVMe/TCP (Nutanix with Everpure FlashArray): Create a virtual switch and configure VLANs.

Initiator (Host)
  • FC: Add FC HBAs in each server; configure multipath policies between the server and storage.
  • NVMe/TCP: Use general-purpose Ethernet NICs; Nutanix automatically handles creation of host NQNs (NVMe Qualified Names), discovery, and multipath configuration when adding external storage.

Target (Storage)
  • FC: Create host identities, create volumes, and configure ACLs.
  • NVMe/TCP: Most array vendors allow non-disruptive upgrades to add front-end ports and enable NVMe/TCP, and these ports can coexist with FC ports.
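Although Nutanix generates and registers host NQNs automatically, it can help to know what an NQN looks like. The sketch below builds a host NQN in the UUID-based naming format defined by the NVMe base specification; the exact format Nutanix uses for its hosts is not specified here, so treat this as illustrative only.

```python
import uuid

def make_uuid_host_nqn() -> str:
    """Build a host NQN using the UUID-based naming format from the NVMe base spec."""
    return f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"

print(make_uuid_host_nqn())
# Example output: nqn.2014-08.org.nvmexpress:uuid:7f2c1f9e-0b7d-4a53-9d6e-2f1c8a0e4b91
```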

Migrating from Fibre Channel to NVMe/TCP

These are the steps required to prepare the FlashArray to migrate to NVMe/TCP. A FlashArray can run Fibre Channel and NVMe/TCP concurrently, so both protocols can remain active during the migration.

  1. Storage Array Preparation:
    • Identify available network interfaces to assign IP addresses for the NVMe Target IPs.  If no interfaces are available, work with your Storage representative to add front-end network cards on the Storage Array controllers.
    • Depending on your performance requirements, size the network interfaces to match or exceed the aggregate bandwidth of the existing Fibre Channel interfaces. For example, if you currently have 4 x 16Gb FC interfaces enabled, aim for 4 x 25Gb network interfaces or higher (a quick sizing sketch follows this list).
    • Enable NVMe/TCP on the available network interfaces.
  2. Networking:
    • Depending on the available networking, existing network switches can be re-used and networking can be consolidated and made simpler. If existing switches do not have enough ports, additional ethernet switches can be set up to scale the network.
    • If the requirement is to keep the storage network separate, as a dedicated network similar to Fibre Channel deployments with their own switches, dedicated Ethernet switches managed by the storage administrators can be set up for storage traffic.
  3. Servers (NCI Compute Nodes)
    • Network Interface Cards (NICs): Nutanix can use 10Gb Ethernet NIC cards but highly recommends using 25Gb Ethernet NIC cards on the servers. Ideally, use two dedicated storage NICs.
    • FC HBAs: Fibre Channel HBAs can be kept during the transition phase but should ideally be removed once the migration to NVMe/TCP is complete, as per Nutanix recommendations (e.g., removing unused components before imaging NCI compute nodes).
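As a quick sanity check for the interface sizing mentioned in step 1, the sketch below compares the aggregate nominal bandwidth of an existing FC configuration with a proposed Ethernet configuration. The port counts and speeds are the example values from above; substitute your own.

```python
def aggregate_gbps(port_count: int, speed_gbps: int) -> int:
    """Total nominal bandwidth across all ports, in Gb/s."""
    return port_count * speed_gbps

fc_total  = aggregate_gbps(port_count=4, speed_gbps=16)   # 4 x 16Gb FC = 64 Gb/s
eth_total = aggregate_gbps(port_count=4, speed_gbps=25)   # 4 x 25GbE   = 100 Gb/s

print(f"FC aggregate:       {fc_total} Gb/s")
print(f"Ethernet aggregate: {eth_total} Gb/s")
print("Ethernet meets or exceeds FC bandwidth:", eth_total >= fc_total)
```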

Networking Best Practices for Compute Nodes

The deployment requires a dedicated storage virtual switch on the NCI compute cluster. Refer to the deployment guide for detailed step-by-step instructions.

  • Virtual Switch (vs1): Configure a separate virtual switch dedicated to external storage traffic, ensuring it has at least two physical NICs. Using LACP (Active/Active) is recommended for best performance. If LACP is not available, select Active/Active w/ MAC Pinning to enable load balancing at the virtual switch layer (see the configuration sketch at the end of this section).
  • MTU: A Maximum Transmission Unit (MTU) of 9,000 bytes is recommended for the new virtual switch used for Everpure (formerly Pure Storage) FlashArray data traffic.
  • Network Interfaces: The network topology generally involves:
    • vswitch0: Nutanix management, Live migration, DR replication, Application traffic.
    • vswitch1: Storage data traffic only.
Figure 02: Network Configuration

The above diagram represents the recommended network cabling for a redundant network configuration between Nutanix and Everpure (formerly Pure Storage). 

Once the physical changes have been made to the compute nodes, the FlashArray, and the network switches, Nutanix and Everpure (formerly Pure Storage) automatically handle the protocol configuration and management, such as discovery and multipath policies, without any administrator input, making the solution simple to manage.
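To recap the networking recommendations above, the sketch below represents the intended storage virtual switch settings as plain data and checks them against the guidance in this section (at least two physical NICs, an MTU of 9,000 bytes, and LACP or Active/Active w/ MAC Pinning). The field names and NIC names are illustrative and do not correspond to any Nutanix API.

```python
from dataclasses import dataclass

@dataclass
class StorageVswitchPlan:
    name: str
    physical_nics: list
    mtu: int
    bond_mode: str   # e.g. "LACP" or "Active/Active w/ MAC Pinning"

def check_plan(plan: StorageVswitchPlan) -> list:
    """Return a list of deviations from the recommendations in this section."""
    issues = []
    if len(plan.physical_nics) < 2:
        issues.append("Use at least two physical NICs for redundancy.")
    if plan.mtu != 9000:
        issues.append("An MTU of 9,000 bytes is recommended for storage data traffic.")
    if plan.bond_mode not in ("LACP", "Active/Active w/ MAC Pinning"):
        issues.append("Prefer LACP; otherwise use Active/Active w/ MAC Pinning.")
    return issues

plan = StorageVswitchPlan(
    name="vs1",                      # dedicated storage virtual switch
    physical_nics=["eth2", "eth3"],  # illustrative NIC names
    mtu=9000,
    bond_mode="LACP",
)
print(check_plan(plan) or "Plan matches the recommendations.")
```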

Conclusion: The Path to Modernized Storage

The shift to NVMe/TCP represents more than just a protocol change; it is a modernization and simplification of the data center. By leveraging standard Ethernet protocols on Nutanix Cloud Platform with external storage, organizations can simplify their hardware footprint, reduce specialized management costs, and build an infrastructure ready for the high-demand workloads of tomorrow, from massive databases to AI-driven applications.

 

©2026 Nutanix, Inc. All rights reserved. Nutanix, the Nutanix logo and all Nutanix product and service names mentioned herein are registered trademarks or trademarks of Nutanix, Inc. in the United States and other countries. All other brand names mentioned herein are for identification purposes only and may be the trademarks of their respective holder(s).

Our decision to link to or reference an external site should not be considered an endorsement of any content on such a site. Certain information contained in this post may relate to, or be based on, studies, publications, surveys and other data obtained from third-party sources and our own internal estimates and research. While we believe these third-party studies, publications, surveys and other data are reliable as of the date of this paper, they have not been independently verified unless specifically stated, and we make no representation as to the adequacy, fairness, accuracy, or completeness of any information obtained from a third party.

All code samples are unofficial, are unsupported and will require extensive modification before use in a production environment. This content may reflect an experiment in a test environment. Results, benefits, savings, or other outcomes described depend on a variety of factors including use case, individual requirements, and operating environments, and this publication should not be construed as a promise or obligation to deliver specific outcomes.
