Organizations have moved to a hybrid multicloud approach in recent years, often to save money, avoid vendor lock-in and gain flexibility. But now, organizations are increasingly making the shift for another reason: to expand their artificial intelligence use cases.
However, simply moving to hybrid multicloud isn’t enough. Businesses need the right infrastructure to scale for future AI technology. To remain competitive, organizations are increasingly developing new AI-based apps, both customer- and employee-facing, to improve service, productivity and experience. Because new cloud-native applications, like today’s GenAI/GPT apps and workloads, can run wherever it makes sense, organizations also need the ability to manage across that infrastructure in real time.
The Nutanix 2025 Enterprise Cloud Index found that 90% of organizations report having at least some of their applications containerized. As more businesses rapidly adopt new application workloads, such as GenAI, experts expect containerization to continue to increase. Interestingly, 94% of respondents say that adopting cloud-native applications/containers brought their organization benefits.
Rajiv Ramaswami, CEO of Nutanix, says that although the Kubernetes substrate needed for containers is readily available, apps also need data services, such as databases, messaging, streaming, caching and search. He explains that because these services are often siloed in the public cloud, organizations need a consistent set of data services across all systems.
“You can run it [artificial intelligence apps] on top of a Nutanix infrastructure, of course, wherever that's available,” says Ramaswami. “But we are also going to enable these services to be available natively on AWS, Azure and other native cloud substrates, so that you can truly then think about building an app once using this set of services. You could see how easy it is to be able to deploy that app anywhere and also not be locked in.”
A multicloud environment is one in which an organization uses multiple public, private or hybrid clouds. Multicloud provides organizations with a host of benefits that can improve workloads, data flow and app creation.
Combined with a hybrid cloud architecture, multicloud has been the way forward for many businesses in recent years. With AI becoming intrinsically woven into more processes and apps, multicloud will continue to play a foundational role.
Many organizations create a hodgepodge multicloud environment by selecting different clouds and providers without a cohesive plan. This often leads to an ineffective and costly environment that doesn’t fully meet the business’s needs. By instead creating a multicloud strategy that weighs all factors, including business goals, skills, workloads and budget, organizations can build a multicloud approach that serves as the foundation for their business’s future.
An effective multicloud strategy allows the organization to efficiently distribute workloads across multiple cloud environments while enabling seamless data flow to and from on-premises infrastructure or datacenters. This approach lets the business get the biggest bang for its buck while mitigating risks associated with the individual cloud components of the architecture. A business-centric cloud strategy pushes forward organizational change in the face of resistance to technological evolution and speeds up the digital transformation process.
“AI-based services and applications are absolutely made for hybrid multicloud architectures,” says Induprakas Keri, senior vice president and general manager of hybrid multicloud at Nutanix.
“Steps in the AI workflow will happen across various infrastructure environments, with training happening in the cloud, enrichment, refinement and training in core datacenters, and inferencing at the edge. Successfully delivering a cohesive, scale-out infrastructure that can span across this entire AI workflow will be a key to success.”
Cloud-native development is picking up pace as organizations transform to an agile, integrated hybrid IT infrastructure. This enables them to take advantage of DevOps and build portable application stacks that are far more versatile than those that run on a single cloud. There are several advantages to having planned and effectively managed multicloud operations.
Avoid vendor lock-in: While cloud-native apps and services are currently portable to a fair extent, many providers attempt to make their platforms “sticky” by adding exclusive features to differentiate themselves from the competition. Multicloud deployments ensure that businesses get the architecture best suited to their needs and strike a balance between portability and functionality.
Reduced risk of service disruption: A multicloud strategy diversifies essential services over different clouds. The organization can retain control over mission-critical apps and data with a private cloud or on-premises infrastructure that integrates with the multicloud implementation.
Cost savings: Multicloud gives a company the choice to bring together the right apps, platforms, and services that meet the business’s needs and functions without having to compromise on minimum requirements or standards. Organizations now have the option to move workloads that aren’t contributing significantly to revenue to another provider within a few hours.
The right cloud for the job: The wealth of features and customizations offered by every cloud provider for different workloads negates the disadvantages of commoditization. Some cloud environments are better suited to certain apps and business functions, especially in terms of AI workloads. Even within functions such as storage, an organization might choose to use multiple clouds for different levels of usage, performance needs and security.
IT teams would do well to anticipate the scope and potential challenges of designing a multicloud architecture that’s right for the organization as early as possible. Here are some considerations that need to be thought through before jumping in and adding more cloud environments to the existing architecture for AI adoption.
Versatile architecture: All AI applications, tools and databases, whether in use or under in-house development, need to support any cloud rather than a particular one. When organizations skimp on building support or integration capabilities into an application, they sacrifice workload flexibility down the line, when perhaps the real need for scaling the application arises. Since the number of cloud providers and capabilities will only increase with time, it makes sense to keep applications and their integration capabilities continuously updated. Doing so ensures that IT always has the leeway to choose the most suitable or cost-effective cloud environment for each of them.
The fundamental components of a multicloud architecture for AI can be divided into three layers:
Foundational resources: The underlying compute, storage, network, and security elements
Workload management: Constructs such as VMs and containers and workload lifecycle management frameworks such as Kubernetes or OpenStack
Service consumption: Applications that decouple IaaS, SaaS and PaaS by abstracting the underlying workloads and physical and virtual resources
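To make the layering concrete, here is a minimal Python sketch of how the service consumption layer might abstract the underlying clouds so an app is built once and deployed anywhere. The provider class and method names are hypothetical and do not correspond to any particular vendor’s SDK.

```python
from abc import ABC, abstractmethod


class CloudProvider(ABC):
    """Hypothetical abstraction over the foundational resource layer."""

    @abstractmethod
    def provision_compute(self, cpus: int, memory_gb: int) -> str:
        """Return an identifier for the provisioned instance."""

    @abstractmethod
    def create_object_store(self, name: str) -> str:
        """Return a URI for the created storage bucket."""


class ExampleCloudA(CloudProvider):
    """Stand-in for one public cloud; a real adapter would call that cloud's SDK."""

    def provision_compute(self, cpus: int, memory_gb: int) -> str:
        return f"cloud-a-instance-{cpus}x{memory_gb}"

    def create_object_store(self, name: str) -> str:
        return f"cloud-a://{name}"


def deploy_inference_service(provider: CloudProvider, model_bucket: str) -> dict:
    """Deploy the same workload definition against any provider adapter."""
    instance = provider.provision_compute(cpus=8, memory_gb=32)
    bucket = provider.create_object_store(model_bucket)
    return {"instance": instance, "model_store": bucket}


if __name__ == "__main__":
    # The same deployment logic runs unchanged against any adapter.
    print(deploy_inference_service(ExampleCloudA(), "genai-models"))
```

The deployment logic itself stays cloud-agnostic; only the adapters differ per provider.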
Cloud providers and platforms are now bound by reliable and robust standards described and maintained by the Cloud Native Computing Foundation (CNCF). Technologies developed by major providers, such as Kubernetes, which originated at Google, have also evolved into open source, collaboratively maintained standards.
Optimized application stack: Amazon, Google, Microsoft and other major cloud platform vendors have end-to-end offerings and services, including orchestration, database and monitoring. However, these are little more than general-purpose additions to the vendor’s core services and may not hold up for mission-critical, big data functions such as large-scale financial transactions, fraud detection or machine learning. That said, IT teams can attempt to build and optimize enterprise applications to the same level of automation and simplicity as these integrated, built-in tools.
Compliance by region: The geographic location and movement of data fundamentally affects multicloud operations, especially for big businesses with big data. Organizations need to comply with country and/or regional-level data and privacy regulations, depending on their industry, but the responsibility is not shared by the cloud provider in most cases. There is little scope for modifying SLAs unless the market is sufficiently important for the vendors; therefore, the bulk of the liability for compliance rests with the organization itself.
Unified management: Multicloud enables the management of disparate components of the IT infrastructure as a single, cohesive set of resources, regardless of the physical location of those resources. It is all about operational functionality rather than technological infrastructure. Thus, multicloud is much more than multiple clouds. A multicloud environment is characterized by the unification and abstraction of private, public and hybrid clouds built on- and off-premises.
This “single pane of glass” management enables real-time orchestration and automatic provisioning of compute, storage, security and network resources while abstracting the complexity of the underlying application and database stacks.
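As a rough sketch of the idea (not a representation of any actual management product), the following Python snippet shows how several clouds might be aggregated into one consolidated capacity view; the classes and figures are purely illustrative.

```python
# Hypothetical "single pane of glass" view: one object holds records for
# several clouds and reports on them as a single resource pool.
class ManagedCloud:
    def __init__(self, name: str, vcpus_total: int, vcpus_used: int):
        self.name = name
        self.vcpus_total = vcpus_total
        self.vcpus_used = vcpus_used


class UnifiedManager:
    """Aggregate view across clouds; real tools would pull live inventory."""

    def __init__(self, clouds: list):
        self.clouds = clouds

    def capacity_report(self) -> dict:
        """Utilization per cloud, presented as one consolidated view."""
        return {c.name: c.vcpus_used / c.vcpus_total for c in self.clouds}


if __name__ == "__main__":
    manager = UnifiedManager([
        ManagedCloud("on-prem", 512, 430),
        ManagedCloud("public-cloud-a", 1024, 310),
    ])
    print(manager.capacity_report())
```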
Real-world multicloud deployment: Multicloud environments involve a wide spectrum of real-world applications. There are quite a few use cases or functional objectives that organizations might consider while evaluating multicloud solutions or forming a strategy.
Deploying GenAI: Organizations often struggle to deploy GenAI on-premises because of the flexibility and scalability its heavy computational demands require. Multicloud makes it possible to create an environment customized to the specific needs of each task and the strengths of each type of cloud. Because multicloud handles diverse data sets better than other environments, organizations are able to more easily train models on those data sets.
Increasing capacity in real time: Many enterprise applications run on-premises or in a private cloud, but when usage spikes, they need additional capacity, which happens increasingly often with AI use cases. During these times, they “burst” into a public cloud, which provides the extra compute or storage resources. This “cloud burst” approach works especially well for non-critical but performance-intensive applications or non-sensitive data.
This process lets the company avoid buying extra hardware that would just sit underutilized most of the time, while making sure local resources are always available for business-critical applications. As organizations continue to adopt more AI workloads, especially GenAI, the need for real-time capacity increases becomes even more critical for business success.
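The bursting decision itself boils down to a simple placement rule. The sketch below assumes a hypothetical monitoring function and scheduler; the threshold and names are illustrative only.

```python
# Minimal sketch of a cloud-burst decision. The utilization source and the
# two scheduling targets are hypothetical placeholders, not real APIs.
BURST_THRESHOLD = 0.85  # burst once on-prem utilization exceeds 85%


def get_onprem_utilization() -> float:
    """Placeholder: a real implementation would query the monitoring stack."""
    return 0.91


def schedule(workload: str, target: str) -> None:
    print(f"Scheduling {workload} on {target}")


def place_workload(workload: str) -> None:
    """Keep workloads on-premises until capacity runs short, then burst."""
    if get_onprem_utilization() > BURST_THRESHOLD:
        schedule(workload, "public-cloud")   # temporary extra capacity
    else:
        schedule(workload, "on-premises")    # default, lower-cost placement


if __name__ == "__main__":
    place_workload("batch-inference-job")
```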
Development and testing: Another common scenario is when organizations run production applications on-premises but use the public cloud for development and testing. As businesses are developing AI applications and processes, this situation is becoming even more common.
These companies still need the multi-region capabilities and features of a public cloud, such as a CDN, for the production environment. In other cases, an organization already using a public or hybrid cloud might rent colocation facilities with virtualization options for large-scale testing rather than paying hourly rates to Amazon or Azure.
Running high-performance applications: Some applications are specifically built to run across multiple clouds, especially for resilience, performance and continuous delivery.
These applications can survive a DDoS attack or a regional or global failure of a single cloud service. Additionally, GenAI applications typically require high performance and bandwidth for successful deployment. Organizations using AI for automation also need high performance to achieve the reliability and accuracy those tasks require.
Disaster recovery: Multicloud environments add a new dimension to redundancy by maintaining an up-to-date copy of data or apps on a different cloud platform from a different provider. The multicloud strategy needs to define a Recovery Point Objective (RPO) that specifies how much data loss the workload can sustain and a Recovery Time Objective (RTO) that bounds acceptable downtime, depending on the business function.
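As a simple illustration of how an RPO might be enforced in practice, the sketch below checks whether the newest cross-cloud replica is fresh enough; the 15-minute objective and the timestamps are placeholder values.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical example: verify that the secondary cloud's copy of a workload
# is fresh enough to satisfy the workload's Recovery Point Objective.
RPO = timedelta(minutes=15)


def rpo_satisfied(last_replica_time: datetime, now: Optional[datetime] = None) -> bool:
    """Return True if the newest replica is within the allowed data-loss window."""
    now = now or datetime.now(timezone.utc)
    return (now - last_replica_time) <= RPO


if __name__ == "__main__":
    last_sync = datetime.now(timezone.utc) - timedelta(minutes=9)
    print("RPO met" if rpo_satisfied(last_sync) else "RPO violated - trigger alert")
```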
Cloud arbitrage: This is a pure cost-optimization model where workloads are dynamically shifted to the cloud system with the best pricing structure at any given time. Organizations that have skilled teams and tools will aim to move certain applications, or even parts of applications, to the most cost-effective cloud available, depending on the compute resources needed at the time.
For this, they not only need to be aware of the differences in pricing and functionality, but also need to be able to test production performance in real time. For example, IT can measure the request-response time for the same application running in the same set of regions over different clouds for a preset duration.
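A measurement like that can be approximated with a short script that times request-response latency for the same application exposed through different clouds; the endpoint URLs below are placeholders.

```python
import time
import urllib.request

# Placeholder endpoints for the same application deployed on different clouds.
ENDPOINTS = {
    "cloud-a": "https://app.cloud-a.example.com/health",
    "cloud-b": "https://app.cloud-b.example.com/health",
}


def measure_latency(url: str, samples: int = 10) -> float:
    """Average request-response time in milliseconds over a number of samples."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=5) as response:
            response.read()
        total += time.perf_counter() - start
    return (total / samples) * 1000


if __name__ == "__main__":
    for cloud, url in ENDPOINTS.items():
        try:
            print(f"{cloud}: {measure_latency(url):.1f} ms average")
        except OSError as err:
            print(f"{cloud}: measurement failed ({err})")
```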
Enterprises can consider using a cloud broker service to gather and analyze data and conduct testing, move apps between different cloud vendors, facilitate integrations among various cloud offerings, and improve relationships (read: fortify SLAs) with cloud providers.
Digital transformation, AI and data are driving the adoption of hybrid multicloud systems, which promise flexibility, scale and savings. Automation of multicloud workflows lets companies manage disparate workloads and integrate DevOps processes to accelerate innovation and bolster agility.
And yet, the cloud is not a magic formula for AI success. Organizations need to stick by these guiding principles and develop a strong and stable multicloud strategy to meet the ever-increasing operational demands of global business.
This is an updated version of the original article published on November 12, 2020.
Jennifer Goforth Gregory is a contributing writer. Find her @byJenGregory.
Dipti Parmar wrote the original article. Follow her @dipTparmar and on LinkedIn.
© 2025 Nutanix, Inc. All rights reserved. For additional information and important legal disclaimers, please go here.