Blog

Flow Virtual Networking is Now Supported for Nutanix Cloud Clusters (NC2) on AWS

By Dwayne Lessner, Principal Technical Marketing Engineer

June 3, 2024

With the release of the Nutanix AOS™ 6.8 software, Nutanix now fully supports the Flow Virtual Networking™ solution with the Nutanix Cloud Clusters™ (NC2) platform on the AWS® cloud. This completes the story for consistent operations across hybrid multicloud environments.

IT operations teams can use a familiar user interface or APIs, regardless of whether they're deploying a Nutanix cluster in a private datacenter or in one of the supported hyperscalers with NC2. Networking and infrastructure management are no longer separate silos between environments. Customers can focus on business value instead of the headaches that come with juggling different environments.

Consistent Network Operations for Hybrid Multicloud

Flow Virtual Networking is a software-defined solution that provides multi-tenant isolation, self-service provisioning, and IP address preservation using VPCs, subnets, and other virtual components that are separate from the physical network for Nutanix AHV® clusters.

It integrates tools to deploy networking features like VLANs, virtual private clouds (VPCs), VPNs, Layer 2 virtual network extension using VPN or virtual tunnel endpoints (VTEPs), and BGP sessions to support flexible, app-driven networking that focuses on VMs and applications.

You can even take Flow Virtual Networking for a free test drive in a private datacenter and in the cloud. The key value for application developers is that the network substrate looks the same whether the app is deployed on-premises or in the cloud.

With each test drive instance, Flow Virtual Networking APIs spin up a new isolated VPC and floating IP for seamless access to the app. Network policies for every VPC control inbound and outbound traffic. Security is maintained through consistent operations, without misconfigurations on the part of overtaxed administrators.

Flow Virtual Networking in AWS introduces a new deployment wizard in the NC2 portal for the required Prism Central network controller. The wizard provides options to deploy Prism Central in highly available configurations with different PC VM resource sizes based on your use case.

With the deployment of Prism Central, a new AWS security group is also created to give you native control over access within the rest of the AWS environment. Cloud administrators will love that they still have native controls with their NC2 deployment. Flow Virtual Networking for AWS makes life easier for IT operations teams who manage complex environments. If you want a deeper look at the technology, please keep reading, or pass this article on to your cloud teams.

A Deeper Look for Cloud Professionals

You need two new subnets when deploying Flow Virtual Networking: one for Prism Central and one for the Flow Virtual Networking external subnet. The Prism Central subnet is automatically added to the Prism Element instance where it's deployed using a native AWS subnet. The Flow Virtual Networking subnet is also added to every node for each Prism Element instance that's managed by Prism Central using a native AWS subnet.
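To make the subnet requirement concrete, here's a minimal sketch, using only Python's standard `ipaddress` module, of carving the two new non-overlapping subnets out of an AWS VPC CIDR. The CIDRs and the function name are illustrative assumptions, not Nutanix tooling:

```python
import ipaddress

def plan_nc2_subnets(vpc_cidr: str, prefix: int = 24):
    """Carve two non-overlapping subnets out of the AWS VPC CIDR:
    one for Prism Central and one for the Flow Virtual Networking
    external network. Illustrative only -- real deployments size
    subnets per the Nutanix documentation."""
    vpc = ipaddress.ip_network(vpc_cidr)
    # subnets() yields equal-sized child networks at the requested prefix
    candidates = vpc.subnets(new_prefix=prefix)
    pc_subnet = next(candidates)
    fvn_subnet = next(candidates)
    return str(pc_subnet), str(fvn_subnet)

pc, fvn = plan_nc2_subnets("10.0.0.0/16")
print(pc, fvn)  # 10.0.0.0/24 10.0.1.0/24
```

Because both child subnets come from the same parent CIDR at distinct offsets, they can never overlap each other or leave the VPC's address space.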

The Flow Virtual Networking subnet acts as the external network for traffic from a Nutanix VPC. A VPC is an independent and isolated IP address space that functions as a logically isolated virtual network consisting of one or more subnets that are connected through a logical or virtual router. The IP addresses in a VPC must be unique but IP addresses can overlap across VPCs.
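The overlap rule above is easy to demonstrate with Python's `ipaddress` module. This sketch (example CIDRs are hypothetical) shows why the same range is fine in two different VPCs but not twice in one:

```python
import ipaddress

# Subnets inside a single Nutanix VPC must be unique...
vpc_a = [ipaddress.ip_network("192.168.10.0/24"),
         ipaddress.ip_network("192.168.20.0/24")]
# ...but another VPC may reuse the same range, because each VPC
# is an independent, isolated IP address space.
vpc_b = [ipaddress.ip_network("192.168.10.0/24")]

def subnets_disjoint(subnets):
    """True if no two subnets in the list overlap."""
    return all(not a.overlaps(b)
               for i, a in enumerate(subnets)
               for b in subnets[i + 1:])

print(subnets_disjoint(vpc_a))          # True: valid inside one VPC
print(subnets_disjoint(vpc_a + vpc_b))  # False: only allowed across separate VPCs
```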

In AWS, Nutanix uses a two-tier topology to provide external access to the Nutanix VPC. One of the NC2 hosts has a peer-to-peer link network set up that acts as the exit point from the transit VPC. This peer-to-peer link network is a private network that doesn't overlap with any of your Nutanix user VPCs or the AWS environment. The peer-to-peer link host is elected automatically and changes NC2 hosts if there is a failure. This exit point is attached to the Flow Virtual Networking subnet, which has its own dedicated ENI on each NC2 host.
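The no-overlap constraint on the peer-to-peer link network can be checked mechanically. A small sketch, with hypothetical example ranges (100.64.0.0/24 is just a commonly chosen non-conflicting private range, not a Nutanix default):

```python
import ipaddress

def p2p_link_is_valid(p2p_cidr, user_vpc_cidrs, aws_vpc_cidr):
    """The peer-to-peer link range must not overlap any Nutanix
    user VPC or the surrounding AWS VPC."""
    p2p = ipaddress.ip_network(p2p_cidr)
    others = [ipaddress.ip_network(c)
              for c in list(user_vpc_cidrs) + [aws_vpc_cidr]]
    return not any(p2p.overlaps(n) for n in others)

print(p2p_link_is_valid("100.64.0.0/24", ["192.168.0.0/16"], "10.0.0.0/16"))  # True
print(p2p_link_is_valid("10.0.5.0/24",  ["192.168.0.0/16"], "10.0.0.0/16"))   # False
```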

Deploying the first cluster in AWS with Flow Virtual Networking automatically creates a transit VPC that contains an external subnet called overlay-external-subnet-nat (OEN-NAT). The transit VPC requires this external subnet to send traffic to the peer-to-peer link. You can create an additional external network in the transit VPC for routed traffic, also known as no-NAT networking.

The Flow Virtual Networking native AWS subnet consumes the source network address translation (SNAT) IP addresses and any floating IP addresses that are given to user VMs that need inbound traffic to enter the user VPC.

Each NC2 bare-metal node consumes the primary ENI IP address, and 60% of the native Flow Virtual Networking subnet CIDR is available for floating IP addresses that are seen in the AWS console as secondary IPs on the ENI. The total number of floating IP addresses available depends on how many Nutanix VPCs you create and the number of NC2 nodes in the cluster.
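As a rough capacity planner for the rule just described, here's an illustrative back-of-the-envelope calculation. The 60% figure comes from the paragraph above; the exact addresses Nutanix reserves are internal, so treat this as an estimate, not an allocation formula:

```python
import ipaddress

def floating_ip_capacity(fvn_cidr: str, node_count: int) -> int:
    """Rough floating-IP headroom for the Flow Virtual Networking
    subnet: roughly 60% of the CIDR is usable for floating IPs, and
    each bare-metal node also consumes its primary ENI address.
    Illustrative arithmetic only."""
    subnet = ipaddress.ip_network(fvn_cidr)
    usable = int(subnet.num_addresses * 0.60)
    return max(usable - node_count, 0)

# A /24 gives 256 addresses; ~60% is 153, minus 3 node ENI IPs.
print(floating_ip_capacity("10.0.2.0/24", 3))  # 150
```

Growing the cluster eats into the same pool, which is why the number of available floating IPs depends on both the CIDR size and the node count.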


AWS Infrastructure

Nutanix VPC traffic uses a peer-to-peer link to travel to the AWS network.

Traffic exits the transit VPC through the peer-to-peer link, which is hosted on one selected NC2 node. Although the Flow Virtual Networking subnet is connected to all NC2 nodes, the peer-to-peer link is only active on one host.
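The single-active-host behavior can be thought of as a simple election with failover. The sketch below is a conceptual model only; Nutanix's actual election logic is internal, and the lowest-ID rule here is a hypothetical stand-in:

```python
def elect_p2p_host(nodes, healthy):
    """Conceptual model: the peer-to-peer link is active on exactly
    one healthy node; on failure it moves to another healthy node.
    (Hypothetical election rule for illustration.)"""
    candidates = [n for n in nodes if healthy[n]]
    if not candidates:
        raise RuntimeError("no healthy NC2 node to host the peer-to-peer link")
    return min(candidates)  # deterministic pick, e.g. lowest node ID

nodes = ["node-1", "node-2", "node-3"]
health = {"node-1": True, "node-2": True, "node-3": True}
print(elect_p2p_host(nodes, health))  # node-1
health["node-1"] = False              # simulate a host failure
print(elect_p2p_host(nodes, health))  # node-2: the link fails over
```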

Two VMs are located in the Nutanix VPC on Node 2, and the UserVPC Redirect Chassis for those two VMs is located on Node 1. After traffic exits Node 1, it uses a generic network virtualization encapsulation (GENEVE) tunnel to reach the host with the peer-to-peer link, where traffic exits to AWS.

The peer-to-peer link is advantageous because no other VMs are needed to act as external gateways for the VMs running on the cluster. The peer-to-peer link is automatically managed by Nutanix and has full access to an AWS ENI bandwidth without requiring another gateway VM.

User VPC Redirect Chassis

Nutanix traffic flows from Node 1 to Node 2 and from Node 1 to Node 3 through a UserVPC Redirect Chassis.

In the transit-VPC page in Prism Central, you can see which NC2 bare-metal node hosts the peer-to-peer link by clicking Associated External Subnets in the Summary tab.

On-Premises Connectivity

After you deploy the cluster, you can set up a VPN gateway in AWS and create a site-to-site VPN connection. The following figure shows a high-level overview of a VPN connection for a typical NC2-on-AWS deployment. If you're not using Flow Virtual Networking with routed traffic (no-NAT), you can use a standard virtual private gateway in AWS. For no-NAT traffic, you need a transit gateway so you can set a static route for traffic to the Nutanix user VPC. If you're using Flow Virtual Networking, you need one subnet for the bare-metal nodes, one for Prism Central, and one for Flow Virtual Networking.
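To see why the static route matters, here's a minimal longest-prefix-match sketch of a transit gateway route table for a no-NAT deployment. The CIDRs and attachment names are hypothetical examples:

```python
import ipaddress

# Hypothetical transit gateway route table for no-NAT networking:
# traffic to the Nutanix user VPC needs a static route toward NC2,
# while everything else heads back on-premises over the VPN.
routes = [
    ("192.168.100.0/24", "nc2-attachment"),  # static route to the user VPC
    ("0.0.0.0/0",        "vpn-attachment"),  # default route on-premises
]

def next_hop(dest_ip: str) -> str:
    """Pick the route with the longest matching prefix, as AWS does."""
    ip = ipaddress.ip_address(dest_ip)
    matches = [(ipaddress.ip_network(cidr), hop) for cidr, hop in routes
               if ip in ipaddress.ip_network(cidr)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("192.168.100.25"))  # nc2-attachment
print(next_hop("172.16.4.9"))      # vpn-attachment
```

Without the static /24 entry, traffic destined for the user VPC would fall through to the default route and never reach the Nutanix side.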

No-NAT networking using a transit gateway with an attached VPN gateway.

The AHV hypervisor needs outbound access to the NC2 portal through an internet gateway or an on-premises VPN with outbound access. Your Nutanix cluster can sit in a private subnet that can only be accessed from your VPN, which limits exposure to your environment. As you use the NC2 portal to add and remove AWS nodes based on the health of the system, be sure that redundant paths are available for outbound internet access.

The automatic deployment of Flow Virtual Networking makes it easy to start using your NC2 cluster right away while giving you an incredibly simple and unified way to manage your private datacenter and cloud resources. Please check out the resources below to learn more about the capabilities of Flow Virtual Networking and NC2.

©2024 Nutanix, Inc. All rights reserved. Nutanix, the Nutanix logo and all Nutanix product and service names mentioned herein are registered trademarks or trademarks of Nutanix, Inc. in the United States and other countries. All other brand names mentioned herein are for identification purposes only and may be the trademarks of their respective holder(s). Certain information contained in this presentation may relate to, or be based on, studies, publications, surveys and other data obtained from third-party sources and our own internal estimates and research. While we believe these third-party studies, publications, surveys and other data are reliable as of the date of this paper, they have not been independently verified unless specifically stated, and we make no representation as to the adequacy, fairness, accuracy, or completeness of any information obtained from third-party sources.

This post may contain express and implied forward-looking statements, including but not limited to statements regarding our plans and expectations relating to new product features and technology that are under development, the capabilities of such product features and technology and our plans to release product features and technology in the future. Such statements are not historical facts and are instead based on our current expectations, estimates and beliefs. The accuracy of such statements involves risks and uncertainties and depends upon future events, including those that may be beyond our control, and actual results may differ materially and adversely from those anticipated or implied by such statements. Any forward-looking statements included herein speak only as of the date hereof and, except as required by law, we assume no obligation to update or otherwise revise any of such forward-looking statements to reflect subsequent events or circumstances. Any future product or product feature information is intended to outline general product directions, and is not a commitment, promise or legal obligation for Nutanix to deliver any functionality.  This information should not be used when making a purchasing decision.