Back to Basics!

 

June 12, 2012

Simplicity and minimalist design are in vogue these days. Gone are the days of complicated and convoluted products. As an example, look at Apple, which is setting trends by taking product design inspiration from Braun, an established German consumer goods company, and simplifying how users interact with technology. Other examples include minimalist shoes from Nike, New Balance, and Inov-8 and furniture from Ligne Roset and Herman Miller, all of which emphasize clean lines and simple designs.

These examples show that simplicity is the key to thriving in today’s market.

We can apply this same reasoning to storage technologies today.

Not too long ago, the only available storage solution was direct attached storage (DAS). Compute was directly attached to its storage via ATA, SATA, eSATA, SCSI, SAS, and, in some cases, Fibre Channel. The system admin was also responsible for managing the storage.

Life was simple.

Direct Attached Storage (DAS)

DAS was an effective solution provided two conditions were met: first, the data fit comfortably within the capacity available in the system, and second, an effective replication strategy was in place (such as software mirroring with Sun Solstice or Veritas Volume Manager, hardware RAID, or tape backup). The main disadvantage of the DAS approach was its inability to share data or unused resources with other servers. Although this drawback was indirectly addressed by the wide adoption of protocols like NFS, SAMBA, and CIFS, it still gave storage vendors an opening to concoct solutions like the SAN.
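
To make the mirroring idea concrete, here is a minimal Python sketch of RAID-1-style software mirroring: every write lands on two backing stores, and reads fall back to the surviving copy if one fails. The file paths, sizes, and layout are hypothetical stand-ins for raw block devices, not how Solstice or Veritas actually implement it.

```python
# Minimal sketch of RAID-1-style software mirroring (illustrative only;
# paths, sizes, and layout are hypothetical, not Solstice/Veritas internals).
import os

PRIMARY = "disk0.img"   # hypothetical backing store #1
MIRROR = "disk1.img"    # hypothetical backing store #2
BLOCK = 4096            # block size in bytes
BLOCKS = 256            # total blocks per backing store (1 MiB)

def create_backing_stores() -> None:
    """Pre-allocate both copies so the seeks and writes below always succeed."""
    for path in (PRIMARY, MIRROR):
        if not os.path.exists(path):
            with open(path, "wb") as dev:
                dev.truncate(BLOCK * BLOCKS)

def mirrored_write(block_no: int, data: bytes) -> None:
    """A write is complete only after it lands on both copies."""
    assert len(data) == BLOCK
    for path in (PRIMARY, MIRROR):
        with open(path, "r+b") as dev:
            dev.seek(block_no * BLOCK)
            dev.write(data)
            dev.flush()
            os.fsync(dev.fileno())

def mirrored_read(block_no: int) -> bytes:
    """Read from the primary copy; fall back to the mirror if it is gone."""
    for path in (PRIMARY, MIRROR):
        try:
            with open(path, "rb") as dev:
                dev.seek(block_no * BLOCK)
                return dev.read(BLOCK)
        except OSError:
            continue
    raise OSError("both copies are unavailable")

if __name__ == "__main__":
    create_backing_stores()
    mirrored_write(7, b"x" * BLOCK)
    assert mirrored_read(7) == b"x" * BLOCK
```

The point is simply that the redundancy lived on the same server as the data; sharing that data with another host still required something like NFS layered on top.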

The advent of the SAN increased hardware requirements and introduced a wide range of implementation complexities. The role of the system admin was split in two: a storage admin responsible for the SAN, comprising storage arrays and storage switches, and a server admin responsible for the compute resources. The server admin now had to be in constant communication with the storage admin to effectively manage storage for the compute environment. More communication, more touch points, and more layers of equipment were required to get to a simple end result: compute reading from and writing to disk.

Storage Area Network

The requirements to achieve a simple disk read/write and to maintain a “golden copy” for disaster recovery led to the most complicated and expensive storage solution ever designed, the innards of which are as cryptic and secretive as the workings of an intelligence agency.

The storage admin has no access to the inner workings of the SAN and interacts with it only through a set of commands for carving out disk space and allocating it to compute.

With the Unified Computing System (UCS), Cisco was one of the first companies to recognize that the compute/network/storage solutions available today were overly complicated. To address this complexity, it developed a converged protocol (Fibre Channel over Ethernet, or FCoE) and a simplified approach to provisioning and managing the disparate technologies through a single management entity known as UCSM. Integrated management could go only so far toward real simplicity, however, because the environment still consisted of multiple hardware devices that had to be managed separately.

Solutions like Vblock were cobbled together to provide the illusion of seamless integration between hardware and virtualization software, but in reality the complexity of the whole environment was being obscured, not resolved.

Users ended up paying exorbitant amounts to purchase the hardware and then also paying consulting and services fees to have these solutions designed and implemented by solutions architects. Even though this was an incremental step toward simplification and convergence, it still failed to deliver simplicity and true convergence of storage and compute.

Today, we have learned from our follies and inefficiencies and realized that compute and storage are best when they are converged, precluding the need for complex SAN solutions.

The rapid development of hardware and software in tandem has allowed Nutanix to virtualize the datacenter without requiring a SAN. Nutanix provides a distributed and replicated yet simple solution for achieving the end result of completing a read/write, with a 40-60% reduction in CAPEX. It achieves this by eliminating expensive network storage while still providing advanced shared-storage capabilities.

Nutanix does this while still delivering the performance, scalability, and data management features that are required in an enterprise environment. Nutanix Complete Cluster is an integrated hardware and software solution that provides the complete server and storage capabilities you need to run virtual machines and store their data. This building-block approach lets you start with what you need now and add blocks as your needs grow, scaling from a single system to as many as your environment requires.
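
As a rough back-of-the-envelope sketch of the building-block model (the per-node capacity and replication factor below are hypothetical illustrations, not published Nutanix figures), usable capacity grows linearly with every block you add:

```python
# Illustrative scale-out arithmetic with hypothetical numbers: usable capacity
# grows linearly as nodes (building blocks) are added, while a replication
# factor keeps redundant copies of the data on other nodes.
RAW_TB_PER_NODE = 5.0    # hypothetical raw capacity per node, in TB
REPLICATION_FACTOR = 2   # each piece of data kept on two nodes

def usable_capacity_tb(nodes: int) -> float:
    """Usable capacity after accounting for replication overhead."""
    return nodes * RAW_TB_PER_NODE / REPLICATION_FACTOR

for nodes in (1, 4, 8, 16):
    print(f"{nodes:2d} node(s): {usable_capacity_tb(nodes):5.1f} TB usable")
```

The design choice being illustrated is that redundancy and growth come from adding identical nodes, rather than from a separate storage tier that has to be sized, purchased, and managed on its own.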

In essence, the disadvantages of DAS have been resolved by Nutanix, and compute and storage resources can be shared and used to their full capacity.

Life for the system admin can now go back to being simple.