The Consequence of Success – Competitive FUD

By Prabu Rambadran

Nutanix has been very fortunate to have so many customers embrace our web-scale solutions.  With more than 800 customers across 43 countries, we are at the beginning of an exciting journey.

Success, however, seems to come at a price. Lately, we are attracting increasing attention and competitive FUD (Fear, Uncertainty and Doubt). We normally shake off the misinformation and falsehoods – considering it the price of doing business in a competitive market. But when it misleads customers and causes them to make the wrong technology decisions, it’s not always wise to keep silent. In the spirit of staying professional and ‘above the fray’ we will refrain from naming the company disseminating false information regarding Nutanix solutions, but we do want to set the record straight on a few key points. 

False Claim #1 – Nutanix does not provide inline deduplication, and does not deduplicate all data

First, Nutanix does not claim to provide inline deduplication. We provide post-process deduplication to optimize capacity, using our MapReduce framework. MapReduce allows us to distribute dedupe processing across all nodes in a cluster so that performance impact is negligible. Distributing operations across the cluster is fundamental to web-scale architectures.

What processing happens inline? Streams of data are fingerprinted during ingest, and the fingerprints are stored persistently as part of the written block’s metadata. These fingerprints are used for our deduplication to HDD (capacity savings) and deduplication for read cache in SSD and RAM (accelerated performance). We have taken performance into consideration here, as well. We use Intel’s hardware acceleration to compute SHA-1 hashes. SHA-1 is the de facto standard for fingerprinting and, ironically, the same one used by the competitor who inspired this post.
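For readers who want a concrete picture, here is a minimal Python sketch of the idea: fingerprint chunks with SHA-1 as they are written, then let a later post-process pass (conceptually, what a distributed MapReduce job would do across all nodes) collapse duplicates. The names, chunk size and data structures are illustrative assumptions only, not Nutanix code.

```python
import hashlib

# Illustrative only: inline fingerprinting followed by post-process dedup.
FINGERPRINT_CHUNK = 16 * 1024       # assumed fixed-size chunks for the sketch

metadata = {}          # extent_id -> list of chunk fingerprints
fingerprint_map = {}   # fingerprint -> canonical (extent_id, chunk index)

def fingerprint_on_write(extent_id, data):
    """Inline step: compute SHA-1 fingerprints during ingest and persist
    them alongside the written block's metadata."""
    prints = []
    for off in range(0, len(data), FINGERPRINT_CHUNK):
        chunk = data[off:off + FINGERPRINT_CHUNK]
        prints.append(hashlib.sha1(chunk).hexdigest())
    metadata[extent_id] = prints
    return prints

def dedupe_scan():
    """Post-process step: walk the stored fingerprints, keep one canonical
    copy of each chunk and count duplicates that can become references."""
    saved = 0
    for extent_id, prints in metadata.items():
        for idx, fp in enumerate(prints):
            if fp in fingerprint_map and fingerprint_map[fp] != (extent_id, idx):
                saved += 1                               # duplicate chunk
            else:
                fingerprint_map[fp] = (extent_id, idx)   # first copy seen
    return saved

fingerprint_on_write("vm1-extent-0", b"A" * 32 * 1024)
fingerprint_on_write("vm2-extent-0", b"A" * 32 * 1024)
print(dedupe_scan())   # 2 duplicate chunks found across the two extents
```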

Just as important as understanding deduplication technology is knowing when and how to apply it. What is widely known, but conveniently ignored by some vendors, is that not all data needs to be deduped/compressed, or should be. For example, databases, Microsoft Exchange and many other workloads are poor candidates for dedupe. (See: http://technet.microsoft.com/en-us/library/hh831700.aspx)

Additionally, when you create a clone using array offload capabilities, such as VMware VAAI, Microsoft SMI-S or native array clones, the clones are already space efficient and are created in a non-duplicated state (see VAAI quick clone, SMI-S). So additional deduplication of the base VM will not provide any benefit.

The takeaway is not to get enamored with grandiose claims of deduplication, but to investigate (and evaluate!) when dedupe actually makes sense in your environment. For those who want to learn more, please consult our Nutanix Bible.

False Claim #2: Nutanix has limited data protection, and configuring failure domains can be difficult

There is so much deception here that it’s hard to know where to begin. First, we leverage a unique tunable replication factor (RF=2 or RF=3) to provide data protection by ensuring there are multiple copies of the data and metadata within the cluster. We do not rely on any hardware RAID or, even worse, a combination of RAID + RF to ensure data protection. The replication factor can be configured via the HTML5-based Prism UI, and is exactly the same for any hypervisor (ESXi, KVM, Hyper-V). There is no requirement for a special agent or plug-in; you can even do it from your mobile device. More information on this can be found HERE.

One newly added capability that our competitors have failed to deliver is the concept of Availability Domains, which determines the optimal placement for replicas based upon node, block and rack awareness. Intelligently protecting data means that you can lose a full Nutanix block (1, 2 or even 4 nodes) and still have copies of the data available. And, there’s no admin interaction required to enable it.
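To make the idea tangible, here is a small Python sketch of block-aware replica placement under stated assumptions: given a replication factor and a list of nodes tagged with the block they live in, spread the RF copies across distinct blocks whenever possible so that losing an entire block still leaves a copy. The function and data layout are hypothetical, not the actual placement logic.

```python
# Hypothetical sketch of block-aware replica placement (not Nutanix code).
def place_replicas(nodes, rf):
    """nodes: list of (node_id, block_id). Returns up to `rf` node_ids,
    preferring nodes in distinct blocks so a whole block can fail without
    losing every copy of the data."""
    chosen, used_blocks = [], set()
    for node_id, block_id in nodes:
        if block_id not in used_blocks:
            chosen.append(node_id)
            used_blocks.add(block_id)
        if len(chosen) == rf:
            return chosen
    # Fall back to same-block placement only if there are not enough blocks.
    for node_id, _ in nodes:
        if node_id not in chosen:
            chosen.append(node_id)
        if len(chosen) == rf:
            break
    return chosen

# Example: a 4-node cluster spread across two 2-node blocks, RF=2.
cluster = [("node-1", "block-A"), ("node-2", "block-A"),
           ("node-3", "block-B"), ("node-4", "block-B")]
print(place_replicas(cluster, rf=2))   # e.g. ['node-1', 'node-3']
```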

Another benefit of being web-scale: all nodes in the cluster are used for replication and data protection. As a fully distributed system, there is no pairing of appliances or nodes. The potentially fatal downside of a paired system is that if you lose both nodes in an HA pair, you will have data unavailability or data loss.

Lastly, granularity matters with data protection. Nutanix can perform backups at VM granularity. And, the VMs can be backed up locally, to a remote site or to public cloud. You can also replicate VMs to multiple sites for DR purposes.

False Claim #3: Nutanix’s Native Cloning Impacts Performance

This may be the case for traditional storage vendors, but certainly not Nutanix. Nutanix snapshots are very space efficient, and are based upon metadata pointers. Each VM/clone has its own copy of its block map, which eliminates the overhead of long snapshot chain depths that can kill performance on traditional storage arrays.
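The contrast is easiest to see in a toy Python sketch, assuming invented names: a chain-based snapshot read must walk every delta until it finds the block, so read cost grows with chain depth, whereas a per-clone block map resolves any block in a single metadata lookup no matter how many clones preceded it. This is illustrative only, not the actual implementation.

```python
# Illustrative contrast between snapshot chains and per-clone block maps.
def read_chained(chain, block):
    """chain: newest-to-oldest list of {block: data} deltas.
    Read cost grows with the depth of the chain."""
    for delta in chain:
        if block in delta:
            return delta[block]
    return None

def clone_with_own_map(parent_map):
    """Cloning copies the parent's metadata-only block map; reads stay a
    single lookup, and blocks are shared until the clone overwrites them."""
    return dict(parent_map)

# Chain-based read: walks two deltas before finding block 0.
chain = [{1: "snap2-1"}, {0: "snap1-0"}, {0: "base-0", 1: "base-1"}]
print(read_chained(chain, 0))            # snap1-0

# Per-clone block map: one lookup, copy-on-write style overwrite in the clone.
parent = {0: "base-0", 1: "base-1"}
clone = clone_with_own_map(parent)
clone[1] = "clone-1"
print(parent[1], clone[1])               # base-1 clone-1
```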

We’re proud of how we’ve engineered our cloning technology, and have lots more to say. But rather than bloating this blog post, it’s best to read more HERE.

There is more that we would like to cover, as the competitive FUD does not end there. But, we will save it for another day. More importantly, we want IT leaders to make well-informed decisions – even if they do not end up deploying Nutanix. To do our part, we strive to be transparent, honest and open about what our solution does do, as well as what it does not.

Lastly, if you cannot get to the ‘truth,’ then test it yourself. Do not let any vendor hide behind unproven claims and competitive FUD. The truth always comes out in the end.