Most cloud providers also charge egress fees for moving data off their platforms. These typically run between five and 10 cents per gigabyte, meaning that a 150 TB transfer from one cloud to another can add between $7,500 and $15,000 in egress fees alone.
Data transfer speeds can also limit flexibility. For example, moving 1 TB of data over a standard T1 connection at 80% network utilization takes more than 80 days, by AWS's own estimate.
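The two figures above can be sanity-checked with a quick back-of-the-envelope calculation. The sketch below is illustrative, not any provider's actual pricing; it assumes decimal gigabytes (1 TB = 1,000 GB) for billing, as cloud providers do, and binary terabytes for the transfer-time estimate, which is what yields the 80-plus-day figure.

```python
# Back-of-the-envelope calculator for cloud egress cost and bulk-transfer
# time. Rates and link speeds are illustrative assumptions.

T1_BPS = 1.544e6  # T1 line rate in bits per second

def egress_cost_usd(terabytes: float, rate_per_gb: float) -> float:
    """Egress cost at a flat per-gigabyte rate (1 TB = 1,000 GB)."""
    return terabytes * 1_000 * rate_per_gb

def transfer_days(tb: float, link_bps: float = T1_BPS,
                  utilization: float = 0.8) -> float:
    """Days to move `tb` binary terabytes over a link at sustained utilization."""
    bits = tb * 2**40 * 8
    seconds = bits / (link_bps * utilization)
    return seconds / 86_400

print(f"150 TB at $0.05-0.10/GB: ${egress_cost_usd(150, 0.05):,.0f}"
      f" - ${egress_cost_usd(150, 0.10):,.0f}")   # $7,500 - $15,000
print(f"1 TB over T1 at 80% utilization: {transfer_days(1):.0f} days")  # 82 days
```

At the stated rates, the 150 TB example works out to five figures, not four, and the T1 estimate lands at roughly 82 days, consistent with the "more than 80 days" figure.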
On top of that, each cloud provider has a different set of tools and procedures for such basic tasks as provisioning and configuring servers.
“There can be additional costs related to moving schemas, metadata and repointing applications,” said Kevin Petrie, senior director of product marketing at Attunity, a maker of multicloud replication software. “There’s also administrative complexity in that you need to continue to monitor performance against SLAs.”
Like PCs and Macs
All this can add up to a lot of operational overhead. Autodesk has a staff of 80 IT professionals managing its AWS operations; to add a second cloud would probably require hiring 40 more, said Sam Ramji, Autodesk’s vice president of cloud platform, in an interview with SiliconAngle.
Ramji compared managing multiple clouds to supporting a mixed bag of Apple MacBooks and Windows PCs.
“They’re both computers, but the tools you need to manage them are quite different,” he said.
This doesn’t mean users should give up and cast their lot with a single cloud provider. The most practical approach right now is to allocate individual workloads according to the strengths of each platform, according to Attunity’s Petrie.
“Most companies we’re working with don’t try to spread an application across multiple environments but rather choose platforms based on their needs,” he said. For example, companies may keep their production data on-premises but replicate copies to the cloud provider with the strongest analytics tools.
For those organizations that do want to shift workloads, building applications on a cloud-independent platform-as-a-service such as Pivotal’s Cloud Foundry can minimize the need for customization at deployment time. So can working with one of the many emerging multicloud management tools such as Nutanix Beam, which simplifies cost governance and security compliance across multiple platforms.
Kubernetes to the Rescue?
Perhaps the most intriguing technology option on the horizon is Kubernetes, the orchestration manager for software containers, which are portable software environments that include applications and their dependent components. Nearly three-quarters of IT organizations are using containers in production today, and the rest are planning to do so in the future, according to the Cloud Native Computing Foundation.
Software built to run entirely within containers should run unchanged on any cloud or on-premises platform that supports them, at least in theory. Containers don’t solve the problem of data portability, but plenty of companies are attacking that issue.
Kubernetes “is a likely game-changer in the future in enabling the movement of data across infrastructures,” said Moor Insights’ Dillingham.
For most companies, the safest and most cost-effective strategy for the immediate future is to deploy workloads selectively to a limited number of cloud providers and leave them there, Dillingham said.
But users clearly want more flexibility than that, and with worldwide cloud spending expected to reach $210 billion in 2019, there is plenty of innovation lining up to give it to them.
Paul Gillin is the former editor-in-chief of Computerworld and founding editor of TechTarget. He’s the author of five books about social media and online communities. Find him on Twitter @pgillin.
© 2019 Nutanix, Inc. All rights reserved. For additional legal information, please go here.