8 Reasons Why Moore’s Law is Essential for Evaluating Enterprise Storage Solutions
While it might not be immediately apparent, Moore's Law contributes directly to the decline in market share for external storage vendors such as EMC, NetApp and IBM.
Next generation technologies like hyperconvergence (HC) are quickly surpassing the capabilities of 3-tier infrastructure. Below are 8 reasons why Moore’s Law should be taken into consideration when evaluating future enterprise storage solutions.
1. A SAN performs best on the day it is installed. After that it’s downhill.
Josh Odgers wrote about how a SAN's performance degrades as the environment scales. Adding more servers, or even more storage shelves to the SAN, reduces the IOPS available to each virtualization host. Table 1 (from Odgers' post) shows how IOPS per server decrease as additional servers are added to the environment.
Table 1: IOPS Per Server Decline when Connected to a SAN
HC: As nodes are added, the storage controllers (which are virtual), read cache and read/write cache (flash storage) all scale linearly or better, because each new node brings Moore's Law enhancements with it.
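The contrast in Table 1 can be sketched with a few lines of arithmetic. The IOPS figures below are hypothetical assumptions for illustration, not numbers from Odgers' data: a SAN's controller pair imposes a fixed IOPS ceiling that is divided among hosts, while an HC cluster adds controller capacity with every node.

```python
# Illustrative per-host IOPS: fixed-capacity SAN vs. HC cluster.
# All figures are hypothetical assumptions for this sketch.

SAN_TOTAL_IOPS = 100_000   # fixed ceiling set by the SAN's controller pair
HC_IOPS_PER_NODE = 25_000  # each HC node ships with its own controller VM

for hosts in (4, 8, 16, 32):
    san_per_host = SAN_TOTAL_IOPS / hosts  # shrinks as hosts are added
    hc_per_host = HC_IOPS_PER_NODE         # constant: capacity grows with each node
    print(f"{hosts:2d} hosts: SAN {san_per_host:8,.0f} IOPS/host, "
          f"HC {hc_per_host:6,.0f} IOPS/host")
```

The SAN column shrinks with every host added; the HC column stays flat because the unit of purchase includes its own storage controller.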
2. Customers must over-purchase SAN capacity
When SAN customers fill up an array or reach the limit on controller performance, they must upgrade to a larger model to facilitate additional expansion. Besides the cost of the new SAN, the upgrade itself is no easy feat.
To avoid this expense and complexity, customers buy extra capacity/headroom up-front that may not be utilized for two to five years. This high initial investment reduces the potential ROI, and Moore's Law ensures the SAN technology becomes increasingly archaic (and therefore less cost effective) by the time it is finally utilized.
Even buying lots of extra headroom up-front is no guarantee of avoiding a forklift upgrade. A Gartner study, for example, found that organizations under-buy storage for VDI deployments 90% of the time.
HC: HC nodes are consumed on a fractional basis – one node at a time. As customers expand their environments, they incorporate the latest technology. Fractional consumption eliminates the risk of under-buying. In fact, it is economically advantageous to start with only what is needed up-front, because Moore's Law tends to deliver higher VM-per-node density in future purchases.
3. A SAN incurs excess depreciation expense
The extra array capacity a customer purchases up-front starts depreciating on day one. By the time the capacity is fully utilized down the road, the customer has absorbed a lot of depreciation expense along with the extra rack space, power and cooling costs.
Table 2 shows an example of excess array/controller capacity purchased up front that depreciates over the next several years.
Table 2: Excess Capacity Depreciation
HC: Fractional consumption eliminates the requirement to buy extra capacity up-front, minimizing depreciation expense.
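The excess-depreciation point can be made concrete with a small straight-line calculation. The purchase price, depreciation schedule and utilization ramp below are all hypothetical assumptions, in the spirit of Table 2:

```python
# Straight-line depreciation charged on capacity bought up-front but not yet used.
# All figures are hypothetical assumptions for illustration.

purchase_price = 500_000  # array bought with extra headroom on day one
years = 5                 # straight-line depreciation schedule
annual_depreciation = purchase_price / years

# Assumed utilization ramp: only a fraction of capacity is in use each year.
utilization = [0.30, 0.45, 0.60, 0.80, 1.00]

# Depreciation attributable to the idle portion of the array each year.
wasted = sum(annual_depreciation * (1 - u) for u in utilization)
print(f"Depreciation charged on idle capacity over {years} years: ${wasted:,.0f}")
```

Under these assumed numbers, well over a third of the array's total depreciation is charged against capacity nobody was using yet; a pay-as-you-grow model avoids that line item entirely.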
4. SAN “lock-in” accelerates its decline in value
The proprietary nature of a SAN further accelerates its depreciation. In some cases modest array upgrades are difficult or impossible because of an inability to get the required components.
HC: An HC solution built on commodity hardware also depreciates quickly due to Moore's Law, but with a truly software-defined HC solution, OS enhancements can still be applied to the older nodes.
5. SANs Require a Staircase Purchase Model
A SAN is typically upgraded by adding storage shelves until the controllers, array or expansion cabinets reach capacity; at that point a new SAN is required. This is an inefficient way to spend IT dollars.
It is also anathema to private cloud. As resources reach capacity, IT has no option but to ask the next service requestor to bear the burden of required expansion. Pity the business unit with a VM request just barely exceeding existing capacity. IT may ask it to fund a whole new blade chassis, SAN or Nexus 7000 switch.
HC: An HC unit of purchase is simply a node which, in most cases, is self-discovered once attached to the network and then automatically added to the cluster. Pay-as-you-grow consumption makes it much less expensive to expand private cloud as needed. It also makes it easier to implement meaningful charge-back policies.
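The staircase vs. pay-as-you-grow difference comes down to the cost triggered by one marginal request when the environment is full. The prices below are hypothetical assumptions used only to illustrate the shape of the two models:

```python
# Cost IT must fund when a capacity request arrives and the environment is full.
# Prices are hypothetical assumptions for illustration.

SAN_STEP_COST = 250_000  # a whole new array/chassis: the staircase riser
HC_NODE_COST = 30_000    # one more node, self-discovered and added to the cluster

def marginal_cost(model, at_capacity):
    """Incremental spend triggered by the next VM request."""
    if not at_capacity:
        return 0  # headroom exists; the request costs nothing extra
    return SAN_STEP_COST if model == "san" else HC_NODE_COST

print("SAN step when full:", marginal_cost("san", True))
print("HC step when full: ", marginal_cost("hc", True))
```

The unlucky business unit in the SAN model funds the entire riser of the staircase; in the HC model, the same request funds a single node, which also makes charge-back far easier to justify.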
6. SANs have a Much Higher Total Cost of Ownership (TCO)
SANs lock customers into old technology for several years. This has implications beyond slower performance and fewer capabilities: it means higher ongoing operating costs for rack space, power, cooling and administration, and therefore higher acquisition costs and TCO.
Table 3 compares rack space for a customer who replaced a Vblock 320 with two Nutanix NX-6260 nodes.
Table 3: Vblock 320 vs. Nutanix NX-6260 – Rack Space
SAN administrative costs should also be considered, but they are typically more difficult to gauge. They can also vary widely depending upon the type of compute and storage infrastructure utilized.
HC: In addition to slashed costs for rack space, power and cooling, HC is managed entirely by the virtualization team – no need for specialized storage administration tasks.
7. SANs have a higher risk of downtime / lost productivity
RAID is, by today’s standards, an ancient technology. Invented in 1987, RAID still leaves a SAN vulnerable to failure. In some configurations, such as RAID 5, two lost drives can mean expensive downtime or even data loss.
And regardless of RAID type, a failed storage controller cuts SAN performance in half (assuming two controllers). Lose two controllers and it’s game over.
Sometimes unexpected events such as a water main breaking on the floor directly above the SAN can create failure. And firmware upgrades, in addition to being a laborious process, carry additional risk of downtime. Then there’s human error.
HC: Rather than RAID striping, an HC solution replicates virtual machine data across two or three nodes. A lost drive or even an entire node has minimal impact, as the remaining nodes rebuild the failed unit's data non-disruptively in the background. And the more nodes in the environment, the faster a failed node is restored.
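The claim that rebuilds get faster as nodes are added follows from the rebuild work being spread across every surviving node rather than funneled through one controller pair. A rough model, where the data size and per-node rebuild throughput are hypothetical assumptions:

```python
# Rough model of distributed rebuild time after a node failure:
# the failed node's data is re-replicated by all surviving nodes in parallel.
# All figures are hypothetical assumptions.

DATA_PER_NODE_GB = 4_000     # data to re-protect after one node fails
REBUILD_GBPS_PER_NODE = 0.5  # assumed per-node rebuild throughput (GB/s)

def rebuild_hours(cluster_nodes):
    survivors = cluster_nodes - 1  # nodes sharing the rebuild work
    gb_per_second = survivors * REBUILD_GBPS_PER_NODE
    return DATA_PER_NODE_GB / gb_per_second / 3600

for n in (4, 8, 16):
    print(f"{n:2d}-node cluster: rebuild takes about {rebuild_hours(n):.2f} h")
```

Because aggregate rebuild bandwidth grows with cluster size, doubling the node count roughly halves the exposure window after a failure, the opposite of a RAID rebuild bottlenecked on a single array.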
8. SAN Downsizing Penalty
Growth is not the only source of SAN inefficiency; downsizing can be a problem as well. Downsizing can result from decreased business, but also from a desire to move workloads to the cloud. The high up-front cost and fixed operating expenses of a SAN make it difficult to scale spending down when workloads shrink.
HC: Customers can sell off or redeploy their older, slower nodes. This minimizes rack space, power and cooling expenses by only running the newest, highest-performance nodes. The software-defined nature of HC makes it easy to add new capabilities.
The storage industry is changing thanks to Moore's Law. See for yourself how hyperconvergence can free you from storage management woes.