
Breaking the Addiction to Storage

Shrinking server infrastructure to a fraction of its original size has been widely hailed as one of the biggest infrastructure advancements of the last decade.

Having been a VMware field systems engineer for half a decade, I had the privilege of ushering hundreds of organizations into the world of server consolidation and datacenter optimization. But this wasn’t without its hiccups and bumps in the road.

During my tenure at VMware, I noticed striking and sometimes shocking similarities across many of my customer deployments. Virtualization sparked a tidal wave of interest from every corner of the enterprise, driven in large part by the speed at which VMs could be provisioned. Turnaround times for new workloads were nearly instantaneous. With the click of a button, administrators could spray workloads onto their infrastructure with abandon, without realizing the impact on their finite set of physical resources. Naturally, this exuberance bred a new generation of infrastructure phenomena: VM sprawl, VM stall-out, boot storms, backup storms, device latencies, controller saturation, and spindle exhaustion were common ailments that began to plague many of my customers.

Architecting a high-performance datacenter requires deep knowledge of workload profiles and skillful balancing of dozens of variables across a finite set of physical resources. Virtualization added yet more complexity to this balancing act. Regardless, customers forged ahead, seeking the advantages of server consolidation and the elasticity to satisfy mushrooming demand for new workloads. The irony, however, was that many of these elastic characteristics depended on legacy SAN storage, the antithesis of virtualization’s modularity and scale-out properties. Even to this day, SAN/NAS remains a monolithic, rigid design that has not benefited from virtualization’s distributed nature.

So it came as no surprise that as my customers expanded their virtual footprints, the load on traditional shared storage began to take its toll. If it wasn’t the spindle count, it was the controller’s CPU. Boot storms? Probably not enough buffer cache. High latencies? Time to upgrade the storage fabric or ante up for pricey multipath I/O software. Can’t fit enough spindles into your array? Time to buy a whole new SAN!

As customers kept pouring budgets into shared storage with only marginal results, the financial spotlight began to turn toward virtualization. Virtualization, the cost-savings prodigy, was now responsible for fueling an unstoppable flow of investment into centralized storage. With few other alternatives, customers begrudgingly hopped onto this endless storage-refresh treadmill.

Wheeling in the latest next-gen storage product helped in most circumstances, but it came at a premium price, invited increased budgetary scrutiny, and, worse, began to brew discontent over the cost of virtualization. Yet little could be done to stop virtualization’s momentum, as its overwhelming benefits outweighed this new era of dependence on shared storage. I would argue, however, that virtualization’s reliance on a two-decades-old vertically integrated stack of compute, network, and storage is ripe for change.

Well, things are about to change. Nutanix brings forth an exciting new blueprint that will revolutionize the datacenter. One that rips apart this legacy design and fuses the separate compute and storage tiers into a modular building block that will break the addiction to yesterday’s habits. By marrying storage and compute together, all the goodness of server scale-out modularity is now bestowed upon the world of shared storage. No more rip and replace: start small and grow big, without ever needing to forklift your workloads across generations of infrastructure. Nutanix is just what virtualization has been waiting for.