Solutions for the Federal Government


July 25, 2014

Nutanix 101 for Federal Government

In today’s budget-constrained environment, the U.S. federal government has attempted to make computing more efficient and less expensive by reducing the size of its data center portfolio. Since the Federal Data Center Consolidation Initiative (FDCCI) was first launched in 2010, efforts to close hundreds of data centers and streamline government IT operations have fallen short of the goal set by the Office of Management and Budget (OMB). This has forced government officials to look at data center consolidation in a different way, shifting the emphasis from reducing the overall number of government data centers to reducing the cost of data center ownership. In effect, the focus has moved from data center consolidation to data center transformation.


The Nutanix Solution for the Federal Government

Nutanix offers a solution squarely aimed at OMB’s new requirements for the government’s NextGen data centers. The Nutanix Virtual Computing Platform can transform any data center from an unwieldy, expensive, overcomplicated IT infrastructure into an efficient, cost-effective virtualized environment, enabling government agencies to successfully meet their missions.

  • Nutanix enables the federal government to operate more efficiently and cost-effectively by providing hyper-efficient, massively scalable and elegantly simple data center infrastructure solutions.
  • Nutanix solutions are easy to implement, delivering faster time to value and a near-immediate return on investment.
  • Nutanix is a pioneer of converged infrastructure, eliminating the need for complex storage networks and central storage systems.
  • Nutanix can reinvent the government data center by leveraging many of the advanced software technologies that power leading Web-scale and cloud infrastructures, such as those of Google, Facebook and Amazon.
  • Nutanix has 60+ government customers worldwide, indicating experience and a strong understanding of the government environment.

Space

For 80% of customers considering virtualized initiatives, rack space is a concern. Why? Because datacenter space is expensive in both real cost and opportunity cost. It is more expensive per square foot than standard office space (cubes, offices, break rooms, etc.) because it requires more power, cooling and security. As for opportunity cost, space used for a datacenter means that much less space for cubicles, offices, or break rooms.

If a datacenter becomes filled, it must be emptied or expanded if new initiatives are to be hosted. What fills a datacenter? Racks! What fills racks? Infrastructure! The servers, switches, storage, and security components that host the application components. Thus the more infrastructure each of your initiatives requires, the sooner you will run out of datacenter space. And if you're already low on datacenter space when you start, you need to be particularly cognizant of how much rack space your initiative's infrastructure will consume.

Nutanix's hyper-converged architecture is second to none in maximizing compute, storage and IOPS per rack unit (RU). This is why prospects who are space-challenged strongly consider Nutanix to host their virtualized initiatives.

Weight

Most customers pay little attention to the weight of the infrastructure required to host their virtualized initiative. But for some, weight is critical. In the tactical DoD, where initiatives need to be hosted and accessed in forward-deployed locations, weight matters both when shipping components overseas and when those components must be hand-carried and configured in challenging terrain and weather conditions.

By using Nutanix, customers concerned with weight combine server, storage and switch components into a single platform that is lighter and far less cumbersome than traditional architectures. Nutanix DoD customers have reduced the number and size of ruggedized cases required in-theater, and in many cases have dropped from a 4- or 5-person carry to a 2-person carry.

Power

The "big four" in delaying virtualized initiatives are power, cooling, cost, and complexity (more on the latter three later). Power shortcomings can delay any virtualization effort by months. Why? Because the infrastructure required to run VMware, Citrix, or KVM draws massive amounts of power. For example, an HP C7000, which holds sixteen dual-socket blades, has six 2400-watt power supplies. But for some of VMware's most important features (HA, vMotion, DRS) to work, servers aren't enough; a SAN is required as well. So add two 800-watt power supplies for a pair of NetApp controllers if you want to use vSphere Standard or higher. Then add another few power supplies for the disk shelves and the switches that connect the servers to the storage. You're looking at some serious power draw to host that virtualized initiative! So much power, in fact, that many customers decide to buy VMware but delay their purchase by 3-6 months while the local power company installs new circuits in their datacenter.

Nutanix is the choice for customers who have limited power in their datacenters, or who are concerned with the cost or ecological impact of unnecessary power use. That same 16,000+ watt infrastructure from HP and NetApp could be replaced by four Nutanix blocks drawing a maximum of 1400 watts each (including their redundant power supplies) — thus a total of 5600 watts required as compared to 16,000+ watts. And Nutanix customers can get started tomorrow, without the need to delay their project by months while awaiting power increases.
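
For those who like to check the math, here is a quick back-of-the-envelope sketch using the nameplate figures above. Keep in mind these are power-supply ratings rather than measured draw, and the disk-shelf and switch allowance is our own rough assumption:

```python
# Back-of-the-envelope power comparison using the nameplate figures
# cited above. These are power-supply ratings, not measured draw, and
# the disk-shelf/switch allowance is an assumption for illustration.
c7000_watts = 6 * 2400        # six 2,400 W chassis power supplies
netapp_watts = 2 * 800        # two 800 W controller power supplies
shelves_and_switches = 1000   # assumed allowance; varies by configuration

traditional = c7000_watts + netapp_watts + shelves_and_switches

nutanix = 4 * 1400  # four blocks at a 1,400 W maximum each,
                    # redundant power supplies included

print(f"Traditional: {traditional:,} W")             # 17,000 W ("16,000+")
print(f"Nutanix:     {nutanix:,} W")                 # 5,600 W
print(f"Reduction:   {traditional / nutanix:.1f}x")  # ~3.0x
```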

Performance

For most virtualized initiatives, performance is critical. There are many measurements used to attempt to predict the speed of each component of the infrastructure. Gigahertz, socket count, and core count measure CPU speed in servers. For storage, disk RPMs and spindle counts are frequently discussed relative to spinning disk, IOPS is the common measure applied to flash storage, and controller quantity and speed have a profound influence. As for the network fabric between the servers and storage, bandwidth and protocol are commonly viewed as the best predictors of its performance.

This complexity in attempting to measure expected performance is eliminated by simplifying the infrastructure. Rather than hosting the virtualized initiative on multiple components (each with a differing role), host on a single, converged virtualization platform. Eliminate the need for any network fabric by attaching the CPUs, RAM, flash and spinning disk to the same motherboard. Have all components communicate at bus speeds. Keep "hot" data in the fastest tier, and "cold" data in the slower tier. But most importantly, combine three tiers into one, and eliminate the need to balance performance tweaks of one variable against those of another. Host on a single appliance of known horsepower. Measure once. As your initiative grows, add another identical appliance.

Politics

In today's workplace, politics are a given. Even in "team environments", human nature has all of us competing as individuals to ascend org charts and pay scales, and fighting to keep ourselves relevant. In the datacenter, this manifests itself in the form of "rice bowls" or "fiefdoms": the server team, the storage guy, the network admin, the desktop lead (you name it) all want their opinion heard, their role to be critical, and, ideally, to be the hero of a successfully delivered initiative.

Converging servers, storage, and network fabric onto a single virtualization platform that cannot be categorized as any one of the three eliminates a significant breeding ground for politics. Individuals have fewer areas in which to "hole up" or draw lines in the sand. Without all of the moving parts and complicated decision trees, IT staff can focus on productive work that moves the initiative toward success in a far shorter timeframe.

Cooling

There is a direct correlation between power and cooling requirements — and for this reason, cooling lurks in the background as a threat to any new virtualization initiative. Power seems to carry more glamour, and is asked about far more often as IT personnel consider their infrastructure. But a lack of adequate cooling in the datacenter can be just as much of a threat to a customer's deployment timeline.

Heat is a byproduct of power consumption, and infrastructure components can only tolerate a certain temperature before they will begin to break down. A 3x reduction in power draw (as mentioned above under "Power") means a proportional reduction in cooling required. This reduction can translate into dollar savings, reduced ecological impact, or time savings in the form of not waiting three months for a contract to be awarded to a cooling contractor for improved air conditioning in the datacenter.
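
To put rough numbers on it: essentially every watt of IT power draw becomes heat that the air conditioning must remove, at roughly 3.412 BTU/hr per watt. A minimal sketch using the wattages from the "Power" section above:

```python
# Rough cooling-load estimate: 1 W of IT power draw ~= 3.412 BTU/hr
# of heat that the HVAC system must remove.
BTU_PER_WATT = 3.412

def cooling_btu_per_hr(watts: float) -> float:
    """Approximate heat load the HVAC must remove, in BTU/hr."""
    return watts * BTU_PER_WATT

# Wattages from the "Power" section above.
for label, watts in [("Traditional (16,000+ W)", 16000),
                     ("Nutanix (5,600 W)", 5600)]:
    print(f"{label}: ~{cooling_btu_per_hr(watts):,.0f} BTU/hr")
# Traditional: ~54,592 BTU/hr; Nutanix: ~19,107 BTU/hr
```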

Cost

Cost is the number one killer of virtualization initiatives. In the earliest phases of any IT effort, ROI assessments are a given. If investment outweighs return, there is considerable likelihood of cancellation before a single server or desktop is virtualized. For the past ten years, the single greatest cost of any virtualized initiative has been storage. Close behind are the server and network costs. Manpower and training to manage the storage, server and network infrastructure are also major contributors to the "investment" column that must be offset by "return" if an initiative is to see the light of day.
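
As a simple illustration of that ROI gate (every dollar figure below is a hypothetical placeholder, not a real project cost):

```python
# Hypothetical ROI check for a virtualization initiative. All figures
# are illustrative placeholders, not real project costs.
investment = 800_000      # storage + servers + network + staff/training
annual_return = 350_000   # projected yearly savings and productivity gains
years = 3

roi = (annual_return * years - investment) / investment
print(f"{years}-year ROI: {roi:.0%}")  # 31%; a negative number here
                                       # usually kills the project
```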

Nutanix customers cut their CAPEX and OPEX costs by 60% or more when compared to traditional infrastructure. The converged infrastructure means fewer components are required, significantly reducing hardware costs. Nutanix "Heat Optimized Tiering" automatically moves IO-hungry virtual machines onto flash storage and idle virtual machines onto spinning disk, so customers get the performance of flash when needed, yet avoid the cost of hosting "cold" data on expensive flash.
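
As a rough illustration of the idea, the sketch below shows a simplified hot/cold placement policy. It is not the actual Nutanix implementation, which operates within its distributed storage fabric; the names and threshold here are hypothetical:

```python
# Illustrative sketch of a hot/cold tiering decision in the spirit of
# the feature described above. Names and threshold are hypothetical.
from dataclasses import dataclass

IOPS_HOT_THRESHOLD = 500  # hypothetical cutoff for "IO-hungry"

@dataclass
class VMWorkload:
    name: str
    recent_iops: float  # observed IO rate over a sampling window

def choose_tier(vm: VMWorkload) -> str:
    """Place busy workloads on flash, idle ones on spinning disk."""
    return "flash" if vm.recent_iops >= IOPS_HOT_THRESHOLD else "hdd"

for vm in [VMWorkload("db01", 4200), VMWorkload("archive02", 12)]:
    print(f"{vm.name} -> {choose_tier(vm)}")  # db01 -> flash, archive02 -> hdd
```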

Complexity

Picture yourself in a data center, with multiple components of your virtualization pilot arriving from different vendors, on different days, with missing parts, and a ten-page bill of materials. If it's going to take eight weeks to even test drive the solution, you can pretty much forget about a green light for the project.

Speculation

Unlike traditional architectures, Nutanix doesn't require that you guess or speculate. Because of Nutanix's modular scaling characteristics and automatic node clustering, Nutanix customers are able to start small, with a few nodes supporting a pilot-sized initial deployment, and then scale simply and without risk to massive enterprise deployments, in increments of as little as one node at a time. This enables customers to invest in additional infrastructure only when needed, and purchasing decisions can be made on facts and experience rather than guesses and speculation. At Nutanix, we don't ask you to cross your fingers and hope for the best.

Scaling

Agencies are typically given two undesirable alternatives: risk buying all infrastructure up front (ignoring the definition of "pilot"), or pilot on less expensive, non-production infrastructure and then rip and replace with untested production hardware when the pilot runs out of horsepower. More often than not, they choose neither.

Why are agencies forced to ponder two poor options? Because traditional infrastructures don't scale well. Because they don't scale well, their vendors offer the soft drink model — Small, Medium, Large, and XL — and they do their best to sell you the Large or XL well in advance of you ever needing it. They'd love for you to pay no mind to the fact that you will receive zero ROI for most of the lifecycle of their device, if you ever grow into it at all. And if you choose wrong and need to change? Bust out your budget for a rip and replace.

How does Nutanix help?

Nutanix doesn't ask you to choose between two terrible alternatives. We don't need to, because linear scaling is at the heart of the Nutanix Virtual Computing Platform. Nutanix clusters start with as few as three nodes and scale to hundreds of nodes through automatic clustering and aggregation of resources. Start small and scale based on facts, only as you achieve increments of success. Don't be forced into ridiculous choices by a traditional platform's shortcomings. Take a look at Nutanix.
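
To make "start small and scale linearly" concrete, here is a minimal sketch; the per-node figures are hypothetical placeholders rather than Nutanix specifications:

```python
# Illustrative sketch of linear scale-out: cluster capacity grows in
# fixed per-node increments. Per-node figures are hypothetical.
PER_NODE = {"cores": 16, "ram_gb": 256, "storage_tb": 8}
MIN_NODES = 3  # clusters start with as few as three nodes

def cluster_capacity(nodes: int) -> dict:
    """Total resources of an n-node cluster under linear scaling."""
    if nodes < MIN_NODES:
        raise ValueError(f"cluster needs at least {MIN_NODES} nodes")
    return {k: v * nodes for k, v in PER_NODE.items()}

# Start with a pilot, then grow one node at a time as demand proves out.
for n in (3, 4, 8):
    print(n, cluster_capacity(n))
```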


Key Federal Customers