Why Organizations Need a Multicloud Strategy and How to Create One

A multicloud setup allows enterprises to get the best out of the IT architecture, given their unique workload requirements.

By Dipti Parmar | November 12, 2020

In IT architecture, the proof of the pudding lies in the deployment. Cloud computing is now ubiquitous in the enterprise and beyond. In fact, a full 83% of respondents surveyed by Flexera for its State of the Cloud 2020 report said their organizations were in the intermediate or advanced stages of cloud maturity.

What’s more, a staggering 93% of enterprise organizations reported using multiple public, private, or hybrid clouds. In other words, they already have a multicloud environment in place. Combined with a hybrid cloud architecture, multicloud is the way forward for most businesses, starting now.

Source: Flexera

Just a couple of years ago, industry experts debated whether multicloud was here to stay or just a fleeting outlier restricted to bleeding-edge technology adopters. However, reports such as the above have conclusively shown that government agencies, SMBs, and even startups have made a deliberate, strategic decision to implement a multicloud architecture. Even major users of the Amazon, Google, and Azure public clouds have taken a +1 approach – as in Amazon +1 – where the +1 could be another public cloud or even an on-premises datacenter.

While many organizations see the business benefits of moving to a multicloud infrastructure, thanks to the easy availability of value-added services, effective cloud management is paramount if they want to reduce risks, improve on-demand availability, and scale up quickly.

“Exploiting cloud successfully and safely requires multiple domains to coordinate and develop a business-driven decision framework and best-practice IT operational models,” explained David Cearley, Research Vice President and Fellow at Gartner. “This helps to standardize cloud strategy across an organization, while allowing for an approach that will meet the unique needs of different use cases and business units.”

That’s where a multicloud strategy comes in.

An effective multicloud strategy allows organizations to efficiently distribute their workloads across multiple cloud environments, while enabling seamless data flow to and from on-premises infrastructure or datacenters, so as to get the biggest bang for their buck while mitigating risks associated with the individual cloud components of this architecture. A business-centric cloud strategy pushes forward organizational change in the face of resistance to technological evolution and speeds up the digital transformation process.

Benefits of Strategic Multicloud Deployment

Cloud-native development is picking up pace as organizations transform to an agile, integrated hybrid IT infrastructure. This enables them to take advantage of DevOps and build portable application stacks that are far more versatile than those that run on a single cloud. There are several advantages of planned and effectively managed multicloud operations.

No vendor lock-in: Vendor lock-in is a paradox – and in some cases, a necessary evil – of cloud adoption. While cloud-native apps and services are portable to a fair extent at the moment, many providers attempt to make their platforms “sticky” by adding exclusive features to differentiate themselves from the competition.

No surprise, then, that companies are looking for a way around this. “Most organizations adopt a multicloud strategy out of a desire to avoid vendor lock-in or to take advantage of best-of-breed solutions. We expect that most large organizations will continue to willfully pursue this approach,” said Michael Warrilow, VP Analyst at Gartner.

Multicloud deployments make sure that businesses get the architecture that is most suited to their needs and achieve a fine balance between portability and functionality.

With vendor selection as part of a multicloud strategy, organizations get to pick the public or private clouds that benefit their customers or meet their compliance requirements without being constrained by the functionalities that a particular vendor offers. As cloud services evolve with the pace of innovation, the organization gets the freedom to make agile choices and do away with the need to commit to particular architecture or technology for good.

Reduced risk of service disruption: A single cloud environment leaves the organization vulnerable to the risk of outages or even a disaster due to security or infrastructure issues. A multicloud strategy mitigates this risk by diversifying essential services over different clouds. The organization can retain control over mission-critical apps and data with a private cloud or on-premises infrastructure that integrates with the multicloud implementation.

Further, data and workloads remain safe when disaster strikes. Even if one provider suffers an outage, data and apps can still be available on another, depending on the backup, replication, redundancy, and load balancing setup.

Cost savings: Every cloud – in the sky and in computing – is different. Pricing models take into account the functionality, amount of integration required with existing software systems and physical infrastructure, compliance needs, and so on. Rapid changes in the landscape and lack of standardization and transparency make it difficult for businesses to find the best deals or value for money, even though cloud providers are known to give unbeatable, by-the-minute pricing with little direct negotiation.

Multicloud, however, gives a company the choice to club together the right apps, platforms, and services that meet business needs and functions without the need to compromise on minimum requirements or standards. Organizations now have the option to move workloads that aren’t contributing significantly to revenue to another provider within a few hours.

The right cloud for the job: The wealth of features and customizations offered by every cloud provider for different workloads negates the disadvantages of commoditization. Obviously, some cloud environments are better suited to certain apps and business functions. Even within functions such as storage, an organization might choose to use multiple clouds for different levels of usage, performance needs, and security.

Considerations for Designing and Deploying a Multicloud Strategy

The benefits of a multicloud deployment are clear, and it's hard to imagine passing up the applications made possible by public, private, and hybrid cloud systems today. However, IT teams would do well to take into account the scope and potential challenges of designing a multicloud architecture that's right for the organization as early as possible. Here are some things that need to be thought through before jumping in and adding more cloud environments to the existing architecture.

Versatile architecture: All applications, tools, and databases in in-house development or already in use need to support "any" cloud as opposed to a "particular" cloud. When organizations skimp on building support or integration capabilities into an application, they sacrifice workload flexibility down the line, just when the real need to scale the application arises. Since the number of cloud providers and capabilities will only increase with time, it makes sense to keep applications' integration capabilities continuously updated, so that IT always has the leeway to pick the most suitable or cost-effective cloud environment for each of them.

The fundamental components of a multicloud architecture can be divided into three layers:

  • Foundational resources – the underlying compute, storage, network, and security elements
  • Workload management – constructs such as VMs and containers and workload lifecycle management frameworks such as Kubernetes or OpenStack
  • Service consumption – applications that decouple IaaS, SaaS, and PaaS by abstracting the underlying workloads and physical and virtual resources
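The three layers above can be sketched in code. This is a minimal, hypothetical model – the class and field names are illustrative, not any vendor's API – showing how a consumable service (layer 3) abstracts workloads (layer 2), which in turn sit on foundational resources (layer 1):

```python
from dataclasses import dataclass, field

@dataclass
class FoundationalResources:      # layer 1: compute, storage, network, security
    compute_vcpus: int
    storage_gb: int
    network: str
    security_groups: list = field(default_factory=list)

@dataclass
class Workload:                   # layer 2: a VM or container plus its resources
    name: str
    kind: str                     # "vm" or "container"
    resources: FoundationalResources

@dataclass
class Service:                    # layer 3: an application consuming workloads
    name: str
    workloads: list

# A hypothetical order service composed of two workloads on different substrates
db = Workload("orders-db", "vm", FoundationalResources(8, 500, "private-subnet"))
api = Workload("orders-api", "container", FoundationalResources(2, 10, "public-subnet"))
svc = Service("order-service", [db, api])

print(len(svc.workloads))  # 2
```

The point of the layering is that the service never references a specific cloud: swapping the workload's underlying resources to another provider leaves layer 3 untouched.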

Fortunately, cloud providers and platforms are increasingly bound by reliable, robust standards described and maintained by the Cloud Native Computing Foundation (CNCF). Technologies that originated inside major providers – such as Kubernetes, which began as an internal Google project – have also evolved into open source, collaborative standards.

Optimized application stack: Amazon, Google, Microsoft, and other major cloud platform vendors have end-to-end offerings and services, including orchestration, databases, and monitoring. However, these are largely general-purpose additions to the vendor's core services, and may fall short for mission-critical functions involving big data, such as large-scale financial transactions, fraud detection, or machine learning. That said, IT teams can attempt to build and optimize enterprise applications to the same level of automation and simplicity as these integrated, built-in tools.

Compliance by region: The geographic location and movement of data fundamentally affects multicloud operations, especially for big business with big data. Organizations need to comply with country and/or regional-level data and privacy regulations, depending on their industry, but the responsibility is not shared by the cloud provider in most cases. There is little scope for modifying SLAs unless the market is sufficiently important for the vendors; therefore, the bulk of the liability for compliance rests with the organization itself.

Unified management: Multicloud enables the management of disparate components of the IT infrastructure as a single, cohesive set of resources, regardless of the physical location of these resources. It is all about operational functionality rather than technological infrastructure. Thus, multicloud is much more than “multiple clouds.” A multicloud environment is characterized by the unification and abstraction of private, public, and hybrid clouds built on and off premises.

Source: Gigaom Hybrid and Multicloud Complexity Report

This ‘single pane of glass’ management enables real-time orchestration and automatic provisioning of compute, storage, security, and network resources while abstracting the complexity of the underlying application and database stacks.

Real-world Multicloud Deployment

Multicloud environments involve a wide spectrum of real-world applications. There are quite a few use cases or functional objectives that organizations might consider while evaluating multicloud solutions or forming a strategy.

Increasing capacity in real time: Many enterprise applications typically run on-premises or in a private cloud, but when usage spikes, they need additional capacity. During these times, they "burst" into a public cloud, which provides the extra compute or storage resources. This is termed cloud bursting and works especially well for non-critical but performance-intensive applications or non-sensitive data. In this scenario, the company avoids buying extra hardware that would just sit around underutilized most of the time while making sure local resources are always available for business-critical applications.
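The bursting decision itself can be reduced to a simple placement rule. A minimal sketch, with made-up capacity numbers and thresholds: route new work on-premises until local utilization nears a cutoff, then overflow to the public cloud.

```python
# Illustrative cloud-bursting placement rule (hypothetical numbers, not a real API)
ON_PREM_CAPACITY = 100   # units of work the private cloud can absorb
BURST_THRESHOLD = 0.85   # burst once 85% of local capacity would be in use

def place_workload(current_load: int, requested: int) -> str:
    """Return 'on-prem' or 'public-cloud' for a new request of `requested` units."""
    if (current_load + requested) <= ON_PREM_CAPACITY * BURST_THRESHOLD:
        return "on-prem"
    return "public-cloud"  # overflow "bursts" into rented public capacity

print(place_workload(40, 10))   # on-prem
print(place_workload(80, 20))   # public-cloud
```

Real implementations hang this logic off autoscaler metrics rather than fixed constants, but the shape of the decision is the same.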

Development and testing: Another common scenario is when organizations run production applications on-premises but use the public cloud for development and testing. They’d still need the multi-region capabilities and features of a public cloud such as CDN for the production environment. In other cases, an organization already using a public or hybrid cloud might rent out colocation facilities with virtualization options for large-scale testing rather than pay hourly rates to Amazon or Azure.

Running high-performance applications: Some applications are specifically built to run across multiple clouds, especially for resilience, performance, and continuous delivery.

Source: Jenkins

These applications can survive a DDoS attack or a regional or global failure of a single cloud service.

Disaster recovery: Multicloud environments add a new dimension to redundancy by maintaining an up-to-date copy of data or apps on a different cloud platform from a different provider. The multicloud strategy needs to define a Recovery Point Objective (RPO) – the maximum data loss the workload can sustain – and a Recovery Time Objective (RTO) – the maximum tolerable downtime – depending on the business function.
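In practice, the RPO translates directly into a replication schedule: if the business tolerates at most N minutes of lost data, the copy on the secondary cloud must be refreshed at least that often, with margin for transfer time. A minimal sketch with an assumed safety margin:

```python
# Sketch: derive a cross-cloud replication interval from an RPO.
# The 0.5 safety margin is an illustrative assumption, not a standard value.

def replication_interval(rpo_minutes: int, safety_margin: float = 0.5) -> int:
    """Minutes between replication runs that keeps worst-case loss under the RPO."""
    if rpo_minutes <= 0:
        raise ValueError("RPO must be positive; zero loss needs synchronous replication")
    return max(1, int(rpo_minutes * safety_margin))

print(replication_interval(60))  # 30 -> replicate every 30 minutes for a 1-hour RPO
print(replication_interval(5))   # 2
```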

Cloud arbitrage: This is a pure cost-optimization model where workloads are dynamically shifted to the cloud system with the best pricing structure at any given time. Organizations that have skilled teams and tools will aim to move certain applications or even parts of applications to the most cost-effective cloud available, depending on the compute resources needed at the time.

For this, they not only need to be aware of the differences in pricing and functionality but must also be able to test production performance in real time. For example, IT can measure the request-response time for the same application running in the same set of regions over different clouds for a preset duration.
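Once those latency measurements exist, the arbitrage decision is a constrained minimization: choose the cheapest provider whose measured latency still meets the SLA. A sketch with fabricated provider names, prices, and p95 latencies:

```python
# Cloud-arbitrage sketch: all provider data below is made up for illustration.
providers = {
    "cloud-a": {"price_per_hour": 0.12, "p95_latency_ms": 40},
    "cloud-b": {"price_per_hour": 0.09, "p95_latency_ms": 120},
    "cloud-c": {"price_per_hour": 0.10, "p95_latency_ms": 60},
}

def cheapest_within_sla(providers: dict, max_latency_ms: int) -> str:
    """Pick the lowest-priced provider whose p95 latency satisfies the SLA."""
    eligible = {name: p for name, p in providers.items()
                if p["p95_latency_ms"] <= max_latency_ms}
    if not eligible:
        raise RuntimeError("no provider meets the latency SLA")
    return min(eligible, key=lambda n: eligible[n]["price_per_hour"])

print(cheapest_within_sla(providers, max_latency_ms=80))  # cloud-c
```

Note that the cheapest provider overall (cloud-b) loses here because it misses the 80 ms SLA – which is exactly why arbitrage needs live performance data, not just price sheets.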

Source: Solace

Enterprises can consider using a cloud broker service to gather/analyze data and conduct testing, move apps between different cloud vendors, facilitate integrations among various cloud offerings, and improve relationships (read, fortify SLAs) with cloud providers.

Architect for Success

Digital transformation and data are driving the adoption of hybrid multicloud systems, which promise flexibility, scale, and savings. Automation of multicloud workflows lets companies manage disparate workloads and integrate DevOps processes to accelerate innovation and bolster agility.

And yet, the cloud is not a silver bullet. Organizations need to stick to these guiding principles and develop a strong, stable multicloud strategy to meet the ever-increasing operational demands of global business.

Featured Image: Pexels.com

Dipti Parmar is a marketing consultant and contributing writer to Nutanix. She writes columns on major tech and business publications such as IDG’s CIO.com, CMO.com, Entrepreneur Mag and Inc. Follow her on Twitter @dipTparmar or connect with her on LinkedIn.

© 2020 Nutanix, Inc. All rights reserved. For additional legal information, please go here.