Optimizing Public Cloud Infrastructure for Private Clouds
Private cloud attributes of elastic scalability and fractional consumption are inconsistent with the hub-and-spoke model of traditional storage and compute infrastructure. Proprietary hardware, physical storage controllers, multiple datastores, LUN constraints, and high costs impede the agility that is at the heart of cloud computing.
Nutanix’s software-defined platform mitigates these challenges while both simplifying the environment and reducing risk. The approach was validated by receipt of a Best of VMworld 2013 Gold Award in the Private Cloud Computing category.
Private cloud adoption is growing at twice the rate of public cloud. A recent QuinStreet study of 341 IT professionals found that 65% of respondents plan to use a private cloud, with another 19% planning to implement a hybrid cloud.
Private cloud adoption is driven primarily by the desire to align more flexible and cost-effective computing capabilities with business objectives such as increasing revenue or building customer stickiness. Business units can instantly deploy their own applications, with the system automatically provisioning and monitoring the required infrastructure components.
While a software-defined datacenter is optimal for effective private cloud, conventional storage and converged infrastructure solutions relying upon proprietary hardware inhibit this architecture in five primary areas:
• Fractional consumption and chargeback
• Elastic scalability
• Cost reduction
• Management simplicity
• Hybrid cloud transition
Fractional Consumption and Chargeback
The number one barrier to private cloud is a lack of funding. Most organizations still rely upon project-based budgeting for funding their virtualized environments. As resources reach capacity, IT has no option but to ask the next service requestor to bear the burden of required expansion. Pity the business unit with a VM request just barely exceeding existing capacity. IT may ask it to fund a whole new blade chassis, SAN, complex core switch or, still worse, an expensive (so-called) converged infrastructure purchase.
Rather than gaining instant and automatic access to the required infrastructure, the business unit either has to cough up the money for far more capacity than it requires, or wait until the next business cycle or until other departments fund the purchase. This delay is anathema to the on-demand promise of private cloud.
A staircase purchasing model also makes chargeback much more difficult to implement. Complexity escalates as IT attempts to meaningfully allocate the cost of large blocks of storage and compute purchased to accommodate future demand.
Nutanix Approach: Nutanix Virtual Computing Platform (VCP) scales one node (containing both server and storage) at a time, facilitating pay-as-you-grow funding for private cloud along with a much simpler chargeback model.
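The staircase-versus-incremental contrast can be made concrete with a toy chargeback model. All prices, capacities, and VM counts below are hypothetical illustration values, not actual Nutanix or vendor figures:

```python
# Toy chargeback model contrasting staircase purchasing with
# per-node, pay-as-you-grow purchasing. All figures are hypothetical.

CHASSIS_COST = 120_000   # assumed cost of a blade chassis + SAN increment
CHASSIS_VMS = 400        # VM capacity added per staircase purchase
NODE_COST = 30_000       # assumed cost of one converged node
NODE_VMS = 100           # VM capacity added per node

def staircase_chargeback(new_vms: int, free_capacity: int) -> int:
    """Cost billed to the requesting business unit under staircase
    purchasing: zero while spare capacity exists, then the full
    chassis price as soon as the request exceeds it."""
    return 0 if new_vms <= free_capacity else CHASSIS_COST

def per_node_chargeback(new_vms: int, free_capacity: int) -> int:
    """Cost billed under node-at-a-time purchasing: only the nodes
    actually needed to cover the shortfall."""
    shortfall = max(0, new_vms - free_capacity)
    nodes_needed = -(-shortfall // NODE_VMS)  # ceiling division
    return nodes_needed * NODE_COST

# A request for 20 VMs that just exceeds 10 free slots:
print(staircase_chargeback(20, 10))  # 120000 -- the whole chassis
print(per_node_chargeback(20, 10))   # 30000  -- a single node
```

The business unit whose request "just barely exceeds existing capacity" is billed the full staircase step under the first model, but only one node under the second.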
Elastic Scalability
One of the most important cloud attributes is the ability to scale efficiently on demand. Traditional storage is expensive to purchase, scales in large chunks, and is time-consuming to deploy. Scaling an array typically means working around LUN limits, configuring zoning, and dealing with multiple new datastores. As a result, deployments can easily require weeks or months – far too long in dynamic cloud environments.
Additionally, the compute, storage, and network layers must all be scaled independently, with each creating additional silos of resources and involving multiple teams. This incremental complexity often leads to environments that are unmanageable as they scale, or that require additional staff to cope.
Nutanix Approach: Nutanix nodes can be purchased inexpensively, one node at a time, and deployed in minutes. New compute and storage resources are seamlessly added to pools without any reconfiguration of datastores or LUNs (Nutanix does not even utilize LUNs or multiple datastores). If a private cloud needs to shrink, Nutanix resources can easily, instantly, and transparently be redeployed to other use cases.
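The pooled, node-at-a-time growth described above can be sketched as a minimal model. The per-node core and storage figures are placeholders, not actual Nutanix specifications:

```python
# Minimal model of a shared resource pool that grows one node at a
# time. Per-node capacities are hypothetical illustration values.

from dataclasses import dataclass, field

@dataclass
class Node:
    cores: int = 16          # assumed cores per node
    tb_storage: float = 20.0  # assumed TB of storage per node

@dataclass
class Cluster:
    nodes: list = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        # Compute joins the cluster and storage joins the pool in one
        # step; no new LUNs or datastores to configure.
        self.nodes.append(node)

    @property
    def total_cores(self) -> int:
        return sum(n.cores for n in self.nodes)

    @property
    def total_tb(self) -> float:
        return sum(n.tb_storage for n in self.nodes)

cluster = Cluster()
for _ in range(3):  # scale out in three single-node steps
    cluster.add_node(Node())
print(cluster.total_cores, cluster.total_tb)  # 48 60.0
```

Each step adds both compute and storage to the same pool, which is the property that lets capacity track demand instead of arriving in staircase-sized blocks.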
Cost Reduction
Gartner recommends that organizations implementing private cloud “think big, but start small”: begin by building services for a specific customer requirement, and evolve from there as IT determines which business units and which services best fit a private cloud model.
A phased deployment means uncertainty regarding the ultimate number of users, services and resource requirements. Attempting to accurately purchase the final storage and compute resources at the start of a private cloud roll-out is a hopeless task.
Under-purchasing storage typically requires expensive and complex forklift upgrades down the road. Purchasing more capacity than required not only entails large up-front cash expenditures, but also leaves the organization running increasingly outdated technology rather than benefiting from the inevitable cost reductions wrought by Moore’s Law.
Nutanix Approach: The same Nutanix infrastructure purchased for the initial cloud “pilot” can be expanded to thousands of nodes. Not only is the up-front cost much lower than that of conventional infrastructure solutions, but as the private cloud grows over the years, organizations benefit from increased VM density per Nutanix node and other enhancements enabled by continuing advances in technology. And because of its software-defined architecture, Nutanix can deliver new features and capabilities to older models through a simple software upgrade.
Management Simplicity
Private clouds are designed to simplify delivery of IT services, but conventional storage solutions add complexity as they expand. Large private cloud environments may require hundreds or even thousands of datastores and LUNs, and dozens of arrays. Each storage, network, and compute tier has its own management interface and requires individual administration. And as arrays are added over the years, incompatible models often require yet another management layer. Networking also grows in complexity, requiring zoning on the fabric side.
Traditional hub-and-spoke infrastructure is difficult to scale and divides shared storage performance between compute nodes
Every leading data center storage and server incumbent has responded to the demand for simplification by offering Converged Infrastructure (CI) solutions that combine compute, storage and network resources either as a product or as a reference architecture. But while these solutions might enable somewhat faster deployment of infrastructure and complex application stacks, they still entail separate silos of compute and storage. The virtualization team cannot manage the storage environment along with the compute.
Nutanix Approach: Software-defined datacenter paradigms such as virtual appliances and VMware NSX are collapsing the traditional access/core/edge switch topology that was previously physical. Nutanix enables the same kind of collapse in networking: the virtual switch serves as the top-of-rack switch, while the physical top-of-rack switch acts as Nutanix’s core switch.
Nutanix truly converges compute and storage, eliminating redundant hardware and consolidating both server and storage management into normal administration through the virtualization console. By pooling resources across all nodes and making them available in real time to any VM, Nutanix eliminates cross-team coordination complexity, utilizing a standard Ethernet network as the fabric. Storage is automatically added to the pool while compute is automatically added to the cluster.
Nutanix scales one node at a time, increasing storage controllers, read/write cache, and compute/storage capacity with each node
Facilitation of Hybrid Cloud
Hybrid agility requires seamless exchange of workloads between private clouds and public providers. It is difficult to move workloads from proprietary arrays in a private cloud to an unknown architecture at a provider. Additionally, conventional consoles are not designed to manage these hybrid environments.
Nutanix Approach: The Nutanix architecture is completely hypervisor agnostic, and can run multiple virtualization technologies simultaneously. This built-in flexibility provides the ability to run mixed workloads in a single private-cloud environment, and makes it easier to exchange workloads with public cloud providers. The software-defined VCP also enables delivery of additional extensibilities and integrations with public clouds through software.
Bringing Google-Like Infrastructure to Enterprise Datacenters
While Nutanix’s scale-out approach is new to the enterprise, the approach itself is hardly new. Google pioneered a SAN-less scale-out architecture built on a distributed file system over ten years ago, and the same type of architecture has since been embraced by every leading cloud provider.
A couple of the developers of the Google File System (GFS), including its lead scientist, saw an opportunity to bring the advantages of true convergence to commercial and government enterprises by leveraging the hypervisor itself. They, along with engineers from companies such as Oracle, VMware, Microsoft, and Facebook, spent three years developing the Nutanix Distributed File System (NDFS), which is at the heart of the Nutanix VCP.
Whether pursuing a private or hybrid cloud computing strategy, enterprises are increasingly embracing Nutanix VCP to obtain the same scalability, resiliency, simplicity, and low infrastructure cost that have become the standard for public cloud providers.
References
2013 Cloud Computing Outlook: Private Cloud Expected to Grow at Twice the Rate of Public Cloud. QuinStreet Enterprise.
Steven Levy. “Google Throws Open Doors to Its Top-Secret Datacenter.” Wired, 10/17/2012.