"Change is the only constant" is a cliché we've all heard. But boy, has it been true for enterprise data centers over the last five years! x86-based virtualization has turned how IT is done upside down. What hasn't changed? From Intel's silicon to Cisco's switches to the job titles of IT folks, everything is different. I regularly meet virtualization architects, virtualization admins, cloud architects, and the like, titles that were unheard of just five years ago. I dug around on LinkedIn and found the following:
| Keyword | No. of Results |
| ------- | -------------- |
All these roles have popped up only in the last five years! Imagine how this is changing the organizational dynamics in IT departments, big or small. We're moving from a world of silos of storage admins, network admins, server admins, and application admins to one where the lines are blurring and the virtualization folks are becoming the nerve center of IT.
Now let’s talk technology. A long time back, when VMware first came out with x86 virtualization, Intel’s and AMD’s chips were blissfully unaware of it, and that meant slower performance for virtualized apps. Because everything went through VMware’s binary translation layer, there was significant overhead to virtualizing apps. As VMware gained ground, AMD and Intel took note and added virtualization awareness to their chips: AMD came out with AMD-V and Intel with VT-x. The hypervisor could now pass more and more instructions straight through to the processor, and the performance overhead dropped significantly. Today, virtualized apps run at close to native speeds. In fact, you can find VMware benchmarks showing that applications can sometimes run faster on VMware than on raw hardware, thanks to better hypervisor caching! But I digress…
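As an aside, on Linux you can see whether a processor advertises this hardware virtualization support by looking for the `vmx` (Intel VT-x) or `svm` (AMD-V) feature flags in `/proc/cpuinfo`. Here's a minimal sketch; the helper function and the sample excerpt are purely illustrative, not taken from a real machine:

```python
def hw_virt_flags(cpuinfo_text: str) -> set:
    """Return hardware-virtualization flags ('vmx'/'svm') found in
    /proc/cpuinfo-style text. 'vmx' means Intel VT-x, 'svm' means AMD-V."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        # Feature flags live on lines that begin with "flags".
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return flags & {"vmx", "svm"}

# Hypothetical /proc/cpuinfo excerpt for illustration:
sample = "flags\t\t: fpu vme de pse tsc msr pae vmx sse2"
print(hw_virt_flags(sample))  # -> {'vmx'}
```

On a real system you would feed it `open("/proc/cpuinfo").read()`; an empty result means the CPU (or the BIOS setting) isn't exposing hardware-assisted virtualization.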
On the networking side, Cisco and other networking vendors have had to change as well. Cisco now provides switching capabilities in software form (the Nexus 1000V); who would have thought of running a fully virtualized switch 10 years ago! Similarly, Cisco's physical switches now have VM awareness.
While most of the layers in the infrastructure stack have changed, one layer has stayed relatively static: the storage layer. There have been attempts to push virtualization awareness down into the storage arrays, but that only offers a band-aid for the profusely bleeding wound that storage has become! The winds of change dictate that we take a fresh look at how to solve the storage challenge for virtual machines. Is making storage virtualization-aware good enough? We believe that's just a feature all storage arrays need to have. How about adding flash? That should speed things up, right? But hey, your thin network pipe can't support all the IO flash can deliver. The point is that there are more fundamental questions around the suitability of a network-storage-based architecture, in which a large number of VMs have to pound a smaller number of LUNs sitting across the network. Given the speed at which VMs are proliferating and the rate at which data is growing, a network-storage-based architecture has become the bane of the virtualized datacenter.
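To make the fan-in problem concrete, here is a back-of-the-envelope sketch. Every number in it is hypothetical, chosen only to show the shape of the math, not to describe any particular array or workload:

```python
# Hypothetical VM-to-LUN fan-in arithmetic (illustrative numbers only).
vms = 400            # VMs in the cluster
iops_per_vm = 50     # average IO demand per VM
luns = 8             # shared LUNs sitting across the network
iops_per_lun = 1500  # sustained IOPS each LUN can serve

demand = vms * iops_per_vm    # aggregate demand: 20,000 IOPS
supply = luns * iops_per_lun  # aggregate supply: 12,000 IOPS

print(f"demand={demand} IOPS, supply={supply} IOPS, "
      f"shortfall={demand - supply} IOPS")
```

The demand side scales with every VM you add, while the supply side is capped by a fixed, much smaller pool of LUNs and the network pipe in front of them; that asymmetry is the crux of the argument above.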
While all these issues have come to cloud the storage layer, there has been a silver lining too. To overcome the storage issues, Google, Yahoo, Amazon, Facebook, and Microsoft (Windows Azure) figured out a way to ditch network storage as they built their next-generation datacenters over the last 5-10 years. Like them, are you ready to take off the (expensive) blinkers that SAN vendors have put on you? Are you ready to think outside the box and dare to nix network storage in favor of what makes sense for virtualization? If the answer is 'yes,' we'd love to talk to you.
Stay tuned for more updates as we unveil Nutanix over the next few months.