
Understanding Web-Scale Properties

March 12, 2014

Last week Gartner released an article saying that by 2017, Web-scale IT will be an architectural approach found operating in 50 percent of global enterprises.

Gartner Says By 2017 Web-Scale IT Will Be an Architectural Approach Found Operating in 50 Percent of Global Enterprises

Web-scale IT is more than just a buzzword; it is a way of designing datacenters and software architectures that incorporates multi-dimensional concepts such as scalability, consistency, fault tolerance, and versioning.

Web-scale describes the tendency of modern architectures to grow at (far-)greater-than-linear rates. Systems that claim to be Web-scale are able to handle rapid growth efficiently and do not have bottlenecks that require re-architecting at critical moments.

Web-scale architecture and its properties are not new; they have been used systematically by large web companies like Google, Facebook, and Amazon. The major difference is that the same technologies that allowed those companies to scale to massive compute environments are now being introduced into mainstream enterprises, with purpose-built virtualization properties.

In an internal discussion, Nutanix CEO Dheeraj Pandey wrote about the key concepts of Web-scale architectures. I used some of his thoughts and expanded on them in different areas to write this article.

I must admit, however, that highly scalable distributed systems are a new area for me, and I am going through a learning process. I will be sharing more of my learnings over the coming months.

The first important thing to remember: Web-scale is not exclusively applicable to SDS (Software Defined Storage); rather it is an architecture model for very large distributed systems.

  • Everything should be in software, running on standard x86 hardware, with no special-purpose machines doing one thing and one thing only. This is where Web-scale intersects with the SDDC (Software Defined Datacenter) for the first time: zero hardware crutches, commodity Taiwan-built hardware with pure software-based services. A number of services already take this approach, including SDN (Software Defined Networking), Virtual Services and SDS (Software Defined Storage).
  • There should be no single point of failure or bottleneck in the management services. Tolerance of failures is key to a stable, scalable distributed system, and the ability to keep functioning in the presence of failures is crucial for availability. Techniques like vector clocks, two-phase commit, consensus algorithms, leader election, eventual consistency, multiple replicas, dynamic flow control, rate limiting, exponential back-offs, optimistic replication, automatic failover, hinted handoffs, data scrubbing, and checksumming all contribute to a distributed system's ability to handle failures (the back-off idea is sketched after this list).
  • Web-scale systems should provide elastic services with an embarrassingly parallel approach to systems building (http://en.m.wikipedia.org/wiki/Embarrassingly_parallel). Parallel approaches enable a non-disruptive take on traditionally disruptive tasks: upgrades become rolling rather than forklift, clusters stay always-on, and all workflows remain online.
  • Web-scale systems should scale out and continue to function normally as one unit, instead of relying on multiple deployments of functional units that are not scalable units by themselves.
  • Web-scale systems are built from the ground up to expect and tolerate failures while upholding the promised performance and availability guarantees or service level agreements.
  • Web-scale systems should provide programmatic interfaces that allow complete control and automation via HTTP-based services, for intra- and inter-datacenter communication. These APIs must use latency- and loss-tolerant protocols with support for asynchronous request-response (see the sketch after this list).
  • Web-scale systems must provide self-defining (and versioned) objects. In the case of SDS, that means self-defining disk formats with the ability to encode and serialize structured data in efficient yet extensible formats such as protobuf or Avro. This way, upgrades of on-disk data can be done lazily; at this scale, Web-scale cannot assume a one-shot data upgrade (a toy versioning sketch also follows this list).
  • Web-scale systems should have self-describing (and version-aware) services, so that different parts of the distributed system can communicate at different versions without expecting a one-shot upgrade of all components.
  • Analytics software should reduce human interaction. Web-scale infrastructures at large web companies run at a ratio of roughly one SRE (site reliability engineer) per 10,000 machines managed, while enterprises are currently at around 1:500. That is a huge gap that only analytics and automation can fill.
  • Strict and eventual consistency models, with a clear understanding of the CAP theorem (Consistency, Availability and Partition tolerance) (http://en.m.wikipedia.org/wiki/CAP_theorem). I found this article by Julian Browne to be a good starting point to learn more about the CAP theorem (a toy quorum example follows this list).
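
To make the API and back-off bullets a little more concrete, here is a minimal Python sketch of an asynchronous request-response client with exponential back-off and jitter. The submit and status callables are hypothetical stand-ins for real HTTP endpoints; treat this as an illustration of the pattern, not any particular product's API.

```python
import random
import time

class TransientError(Exception):
    """Raised by a request callable for retryable failures (timeouts, HTTP 429/503)."""

def call_with_backoff(request_fn, max_attempts=5, base_delay=0.5):
    """Call request_fn, retrying transient failures with exponential back-off
    plus jitter so that many clients do not hammer the service in lock-step."""
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except TransientError:
            if attempt == max_attempts - 1:
                raise
            delay = min(base_delay * (2 ** attempt), 30)  # 0.5s, 1s, 2s, ... capped
            time.sleep(delay + random.uniform(0, delay))  # jitter spreads out retries

def poll_async_job(submit_fn, status_fn, poll_interval=2.0):
    """Asynchronous request-response: the submit call returns a job id right away,
    and the caller polls for completion instead of blocking on one long request."""
    job_id = call_with_backoff(submit_fn)
    while True:
        status = call_with_backoff(lambda: status_fn(job_id))
        if status["state"] in ("SUCCEEDED", "FAILED"):
            return status
        time.sleep(poll_interval)
```

The jitter matters at Web scale: without it, thousands of clients that failed at the same moment would all retry at the same moment too.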
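
The self-defining and version-aware bullets can be illustrated with a toy record format. JSON stands in for protobuf or Avro here, and the field names are invented for the example; the point is simply that every object carries the version it was written with, and readers upgrade data lazily on read instead of assuming a one-shot migration.

```python
import json

CURRENT_VERSION = 2

def upgrade_v1_to_v2(record):
    # Hypothetical schema change: v2 replaced a "size" field (in MB) with
    # "size_bytes". Old data is converted only when it is actually read.
    record["size_bytes"] = record.pop("size") * 1024 * 1024
    record["version"] = 2
    return record

UPGRADES = {1: upgrade_v1_to_v2}

def read_record(raw):
    """Every record is self-describing: it carries its own version, and the
    reader walks it forward one version at a time until it is current."""
    record = json.loads(raw)
    while record["version"] < CURRENT_VERSION:
        record = UPGRADES[record["version"]](record)
    return record

# A v1 record written by an older node is still readable after the upgrade.
print(read_record('{"version": 1, "name": "vm-disk-01", "size": 20}'))
```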
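
And as a rough illustration of the consistency bullet, the simplified quorum sketch below (invented replica and client classes, no real network or partitions) shows the knob the CAP theorem is about: with N replicas, choosing R + W > N gives strictly consistent reads, while smaller read and write quorums trade consistency for availability.

```python
import time

class Replica:
    """One copy of the data: key -> (timestamp, value)."""
    def __init__(self):
        self.store = {}

    def write(self, key, value, ts):
        self.store[key] = (ts, value)

    def read(self, key):
        return self.store.get(key, (0, None))

class QuorumClient:
    """Toy tunable-consistency client. For simplicity it always talks to the
    first W (or R) replicas, so when R + W > N the read set is guaranteed to
    overlap the write set and reads see the latest write."""
    def __init__(self, replicas, w, r):
        self.replicas, self.w, self.r = replicas, w, r

    def put(self, key, value):
        ts = time.time()
        for replica in self.replicas[: self.w]:   # wait for W acknowledgements
            replica.write(key, value, ts)

    def get(self, key):
        # Read R copies and return the value with the newest timestamp.
        versions = [replica.read(key) for replica in self.replicas[: self.r]]
        return max(versions)[1]

replicas = [Replica() for _ in range(3)]
strict = QuorumClient(replicas, w=2, r=2)   # R + W > N: strictly consistent
strict.put("owner", "node-a")
print(strict.get("owner"))                  # -> node-a, always the latest write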

A good example that is close to my heart is vCenter Server. vCenter should have been designed from the ground up to be a distributed management platform with no single point of failure, using a shared-nothing architecture. As we know, vCenter Server is a critical piece of vSphere clusters, and when it is unavailable operations may be drastically impacted. The same can be said about Microsoft Hyper-V's SCVMM.

While it's true that hypervisors themselves are standalone units and will operate without their management servers, losing the management plane should not be something the architecture simply accepts as a possibility.

If vCenter had been designed using Web-scale principles, it would be either a clustered virtual appliance or built into the hypervisor kernel. The more nodes added to the cluster, the more resilient the solution would become, and when a node went down another node could take over as the management endpoint (see the sketch below).
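
As a rough sketch of that idea (Python, with hypothetical node names), every node in such a cluster could run the same election logic and agree on which surviving node serves management traffic; a real system would use a proper consensus protocol such as Paxos or Raft rather than this toy election.

```python
def elect_management_endpoint(nodes, healthy):
    """Return the lowest-id healthy node to serve as the management endpoint,
    or None if no node is healthy. Every node runs the same deterministic
    logic, so they all agree on the result for a given membership view."""
    candidates = sorted(node for node in nodes if healthy.get(node, False))
    return candidates[0] if candidates else None

nodes = ["node-1", "node-2", "node-3"]
print(elect_management_endpoint(nodes, {"node-1": True, "node-2": True, "node-3": True}))
# -> node-1
print(elect_management_endpoint(nodes, {"node-1": False, "node-2": True, "node-3": True}))
# -> node-2 takes over as the management endpoint
```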

Nutanix chose to build its data and control planes from the ground up as a Web-scale distributed system following all of the properties and guidelines above. These guidelines not only guarantee resiliency, scalability, consistency, and tolerance to failures, but also provide a platform on which to bootstrap future datacenter innovation.