Before Nutanix acknowledges any write I/O, it synchronously replicates the data to other nodes in the cluster. All nodes participate in replication, providing full utilization of resources across the system. Only after the data and its associated metadata are replicated does the host receive an acknowledgment of a successful write. This ensures that data exists in at least two independent locations within the cluster and is fault tolerant.
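The write path above can be sketched as follows. This is a minimal illustrative model, not Nutanix code: the `Node`, `write`, and replication-factor (`rf`) names are assumptions made for the example.

```python
# Hypothetical sketch of a synchronous-replication write path.
# The host receives an acknowledgment only after data AND metadata
# exist on rf independent nodes (rf=2 here, per the text).

class Node:
    def __init__(self, name):
        self.name = name
        self.blocks = {}      # block_id -> data
        self.metadata = {}    # block_id -> list of replica locations

    def store(self, block_id, data):
        self.blocks[block_id] = data

def write(block_id, data, local, peers, rf=2):
    """Synchronously replicate, then acknowledge the write."""
    replicas = [local] + peers[:rf - 1]
    for node in replicas:                 # replicate the data itself
        node.store(block_id, data)
    locations = [n.name for n in replicas]
    for node in replicas:                 # replicate the metadata too
        node.metadata[block_id] = locations
    return "ack"                          # only now does the host see an ack

a, b, c = Node("A"), Node("B"), Node("C")
write("blk-1", b"payload", a, peers=[b, c])
```

After the call, `blk-1` exists on two independent nodes (`A` and `B`), so the cluster can tolerate the loss of either one.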
Nutanix allows administrators to dynamically configure different levels of data redundancy for applications running in a cluster. True to the underlying principle of software-defined infrastructure, system policies that define which applications get what level of redundancy are late-bound and applied dynamically to achieve different levels of fault tolerance.
The Virtual Computing Platform incorporates the popular Google-developed Snappy compression algorithm to increase the effective storage capacity of the system up to 4X. Unlike traditional storage solutions that perform compression for entire LUNs or disks, Nutanix compresses data at the sub-block level for increased efficiency and greater simplicity.
Compression policies can be configured to compress data in-line, as it is written to the system. Nutanix can also compress data post-process, or post-ingest, to eliminate any performance impact on the write path.
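The in-line versus post-process distinction can be sketched as below. This is an illustrative model only; it uses the standard-library `zlib` as a stand-in for Snappy (the real Snappy bindings are a third-party package), and the `ingest`/`post_process` names are assumptions made for the example.

```python
import zlib  # stand-in for the Snappy algorithm used by Nutanix

def ingest(data, inline=True):
    """In-line: compress on the write path as data is written.
    Post-process: store raw now; compress later, off the write path."""
    if inline:
        return {"data": zlib.compress(data), "compressed": True}
    return {"data": data, "compressed": False}

def post_process(extent):
    """Background pass that compresses anything ingested uncompressed."""
    if not extent["compressed"]:
        extent["data"] = zlib.compress(extent["data"])
        extent["compressed"] = True
    return extent

raw = b"x" * 4096
inline_extent = ingest(raw, inline=True)              # compressed at write time
lazy_extent = post_process(ingest(raw, inline=False)) # compressed after ingest
```

Either path yields the same compressed extent; the difference is only whether the compression cost lands on the write path or in a background pass.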
Snapshots & Clones
Snapshots provide production-level data protection without sacrificing performance. Nutanix utilizes a redirect-on-write algorithm to dramatically improve system efficiency during snapshotting.
Closely related to snapshots is the concept of a clone, which is essentially a writable snapshot. Nutanix uses the same underlying mechanism for performing cloning as it does for snapshots, allowing it to benefit from the same metadata optimizations. The main difference is that when a clone is taken, both the clone and the original become children of the snapshot, meaning the parent snapshot will now have two writable children.
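The parent-child relationship described above can be sketched as a chain of vDisks. This is a simplified illustrative model, not the actual Nutanix metadata structures: the `VDisk` and `clone` names, and the single-offset block map, are assumptions made for the example.

```python
# Hypothetical redirect-on-write model: a snapshot is an immutable
# parent; writable children store only their own overwritten blocks
# and fall back to the parent chain on reads.

class VDisk:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent    # immutable snapshot, if any
        self.blocks = {}        # only blocks written at this level

    def read(self, off):
        d = self
        while d is not None:    # walk up the snapshot chain
            if off in d.blocks:
                return d.blocks[off]
            d = d.parent
        return b"\x00"          # never-written region reads as zeros

    def write(self, off, data):
        self.blocks[off] = data  # redirect-on-write: new data, new location

def clone(vdisk):
    """Freeze the current state as a snapshot; the original and the
    clone both become writable children of that snapshot."""
    snap = VDisk(vdisk.name + "-snap", parent=vdisk.parent)
    snap.blocks, vdisk.blocks = vdisk.blocks, {}
    vdisk.parent = snap
    return VDisk(vdisk.name + "-clone", parent=snap)

base = VDisk("vm1")
base.write(0, b"orig")
child = clone(base)      # base and child now share one immutable parent
base.write(0, b"new")    # does not disturb the clone's view
```

Note that after `clone`, writes to `base` land in its own block map, so `child` continues to read the original data through the shared snapshot.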
Additionally, Nutanix fully integrates with popular offload capabilities, including the VMware vStorage APIs for Array Integration (VAAI), Microsoft Offloaded Data Transfer (ODX) and SMI-S.
Shadow Clones enable distributed caching of VM data in a multi-reader scenario. VDI deployments, where many linked clones forward read requests to a central master (base VM), are a prime example. In the case of VMware View, this is called the replica disk and is read by all linked clones. Similarly, in Citrix XenDesktop deployments this is called the MCS Master VM. Shadow Clones improve performance in nearly any multi-reader scenario (e.g., deployment servers, repositories, etc.).
With Shadow Clones, Nutanix actively monitors vDisk access trends. If there are requests originating from more than two remote CVMs, as well as the local CVM, and all of the requests are read I/O, the vDisk will be marked as immutable. Once the disk has been marked immutable, the vDisk is then cached locally by each CVM so read operations are now satisfied locally by direct-attached storage resources.
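The detection logic above can be sketched as a simple access monitor. This is an illustrative model only; the `VDiskAccessMonitor` class and its method names are assumptions, and the threshold simply mirrors the "more than two remote CVMs, all read I/O" condition from the text.

```python
from collections import defaultdict

REMOTE_READER_THRESHOLD = 2   # mark immutable when exceeded (per the text)

class VDiskAccessMonitor:
    """Hypothetical sketch of Shadow Clone candidacy detection."""

    def __init__(self):
        self.remote_readers = defaultdict(set)  # vdisk -> remote CVMs reading
        self.writers = defaultdict(set)         # vdisk -> CVMs writing

    def record(self, vdisk, cvm, op, local=False):
        if op == "read" and not local:
            self.remote_readers[vdisk].add(cvm)
        elif op == "write":
            self.writers[vdisk].add(cvm)

    def is_immutable_candidate(self, vdisk):
        # All access is read I/O, from more than two remote CVMs:
        # each CVM may then cache the vDisk locally.
        return (not self.writers[vdisk]
                and len(self.remote_readers[vdisk]) > REMOTE_READER_THRESHOLD)

mon = VDiskAccessMonitor()
for cvm in ("cvm-1", "cvm-2", "cvm-3"):
    mon.record("base-vm-disk", cvm, "read")
mon.record("base-vm-disk", "cvm-local", "read", local=True)
```

Once a vDisk trips this condition it is marked immutable, and subsequent reads on each node are served from that node's local cache rather than over the network.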
Storage for virtual machines is thinly provisioned in the system. Administrators can set the capacity of a vDisk, but physical storage is allocated only when required. Administrators can also set a minimum reservation parameter that guarantees the specified amount of storage for a collection of vDisks. This reduces storage overprovisioning and gives administrators more granular control.
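Thin provisioning can be sketched as lazy allocation against a shared pool. This is an illustrative model only; the `StoragePool` and `ThinVDisk` names and the 4 KiB block size are assumptions made for the example.

```python
class StoragePool:
    """Shared physical capacity drawn on only when blocks are written."""

    def __init__(self, physical_bytes):
        self.free = physical_bytes

    def allocate(self, nbytes):
        if nbytes > self.free:
            raise RuntimeError("storage pool exhausted")
        self.free -= nbytes

class ThinVDisk:
    BLOCK = 4096  # assumed allocation granularity for the sketch

    def __init__(self, capacity, pool):
        self.capacity = capacity   # logical size presented to the VM
        self.pool = pool
        self.blocks = {}           # physical allocation happens lazily

    def write(self, offset, data):
        idx = offset // self.BLOCK
        if idx not in self.blocks:
            self.pool.allocate(self.BLOCK)  # physical space only on first write
        self.blocks[idx] = data

pool = StoragePool(1 << 20)                     # 1 MiB of physical space
disk = ThinVDisk(capacity=1 << 30, pool=pool)   # 1 GiB logical vDisk
disk.write(0, b"hello")
```

Creating the 1 GiB vDisk consumes nothing; the single write draws only one 4 KiB block from the pool, which is why the logical capacity can far exceed the physical space backing it.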
Download the Nutanix System Reliability Tech Note