How HCI Integrated Solutions Help Business Application Deployment

Hyperconverged infrastructure empowers developers to build cloud apps for complex, hybrid deployments by simulating live environments.

By Dipti Parmar

December 29, 2020

Every enterprise business today is information-driven. As a result, IT systems have taken on a huge, business-critical role in everyday operations: vast amounts of data are passed around, ever more compute, networking, and storage resources are consumed, and increasingly specialized applications and software drive these workloads.

Most organizations have implemented a multicloud strategy to run their growing stacks of functional software and services, gaining agility and scalability and realizing cost savings in the process.

And yet, for the enterprise there continue to be loose ends in application development that no single (or integrated) cloud solution can tie up. These include:

  • Interoperability and portability between multiple architectures or vendors
  • Constraints on performance and scalability
  • Data movement between different cloud environments
  • Tight coupling of application layers and development interfaces with vendor-specific cloud technology
  • Limited ability to run queries for complex event processing
  • Client-side security concerns around services that run in browsers


Nor can on-premises datacenters provide all the answers. They pose a different set of challenges for application deployment, including legacy or outdated development and testing platforms, inefficient resource provisioning, lack of visibility into data, and limited workload mobility.

The question, then, is whether there is an ideal IT infrastructure or environment – or even a middle ground – that empowers developers to build and test robust applications while considerably speeding up and smoothing the deployment process.

Enter hyperconverged infrastructure (HCI).

Hyperconvergence is the amalgamation of all the IT systems and deployments of an organization – including compute, networking, and storage resources as well as server systems that run various applications, databases, services, and platforms – into a unified system for better operational efficiency.

In fact, the State of the Enterprise Datacenter report found that HCI significantly improves application performance as well as data efficiency in more than 20% of deployments within a short period of time.

The salient features of an HCI enable the development and deployment of more resilient, high-performing applications that ensure continuous delivery coupled with faster transaction processing at scale.

Here's a 360° view of how that happens.

What Applications Run on Hyperconverged Infrastructure?

Hyperconvergence evolved with the aim of running more workloads on fewer physical resources through better provisioning. As more companies move toward hyperconvergence and resources become more efficient and standardized, the number of production and datacenter workloads shifting to HCI keeps increasing, and so does the corresponding number of applications built for it.

Most mainstream applications in use today – including ERP systems, databases, hypervisors, and instant messengers – run transparently (and perform flawlessly) on hyperconverged infrastructure. These can be categorized into:

  • Business-critical apps such as Oracle, SAP Business Suite, Microsoft SQL Server, Microsoft Dynamics, and IBM DB2
  • Messaging and collaboration apps such as Microsoft Exchange, Microsoft SharePoint, Microsoft Skype for Business, Cisco UC, and Avaya Aura
  • Server virtualization and private cloud platforms such as VMware ESXi, Microsoft Hyper-V, and Nutanix AHV
  • Big data and cloud-native apps such as Splunk, MongoDB, and Elastic
  • Virtual Desktop Infrastructure (VDI) platforms
  • Remote Office and Branch Office (ROBO) enabler apps
  • Development and Testing app suites such as Puppet, Docker, and Chef

The ones that matter most – business-critical applications in the enterprise – are the ones that stand to perform best on HCI. In fact, Gartner estimates that 20% of business-critical applications currently deployed on three-tier architecture will move to HCI by the turn of the year. The firm has even defined hyperconverged infrastructure as a separate market segment that allows for “software-only” or “as-a-service” delivery models on on-premises or cloud infrastructure.

But where do organizations looking to develop and run business-critical Tier 1 applications start?

Dedicated Application Support – The Transition from Virtualization

Virtualization was supposed to make life easier for IT infrastructure administrators. Yet once a VDI environment is implemented, admins are continually expected (and frequently pressured) to add more and more applications to support increasing numbers of users and devices – without the necessary budget, support staff, or infrastructure upgrades. They’re expected to bring up new applications on new VMs or containers and ensure that these never go down.

Fortunately, server virtualization in an HCI enables IT admins to increase capacity in real-time (and save resources using thin provisioning), pool and unify compute and storage resources to enable better application performance, and even “burst” overflow traffic into a public cloud seamlessly to prevent any interruption in services.

That doesn’t mean every company needs to tear down its data centers or bid adieu to its private or public cloud environments and switch to HCI. However, even a single mission-critical application might place excessive demands on current resources, or have a unique performance or I/O profile that brings legacy infrastructure to its knees at times.

For example, a core IT function consisting of nothing more than file storage and access might stress the existing architecture. Another environment might handle huge file-server workloads without a glitch, yet buckle under a database such as SQL Server. This is because the I/O profile of a given application varies across deployments and systems.

Hyperconvergence can ensure the largest and most resource-intensive Tier 1 applications have enough compute power to keep running their workloads, by taking advantage of features such as hardware acceleration and deduplication (which is handled by a separate hardware card instead of the processor). These are especially effective with I/O-hungry applications such as Hadoop, Splunk, Microsoft SharePoint, and others.

Further, as the criticality of certain applications increases, the need for high availability rises correspondingly. This is handled neatly by server virtualization as part of an HCI.

It only gets better from there. When an application needs more storage, CPU, or RAM, each resource can be added independently of the others – or skipped altogether. This makes scaling straightforward and tailored to the needs of each application. All of it can be managed from a single administrative console, making it simple for admins to configure and manage project-specific environments for application development.
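The effect of this independent, per-resource scaling can be sketched in a few lines of Python. This is a minimal model with made-up units and hypothetical node types – not any vendor's actual sizing API – but it shows the idea: the cluster presents the sum of its nodes as one pool, and adding a storage-heavy node grows capacity without buying CPU and RAM in lockstep.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NodeSpec:
    """Resources contributed by one HCI node (illustrative units)."""
    cpu_cores: int
    ram_gb: int
    storage_tb: int

@dataclass
class Cluster:
    """A pool of nodes whose resources are aggregated into one namespace."""
    nodes: list

    def totals(self) -> NodeSpec:
        # The cluster presents the sum of all node resources as a single pool.
        return NodeSpec(
            cpu_cores=sum(n.cpu_cores for n in self.nodes),
            ram_gb=sum(n.ram_gb for n in self.nodes),
            storage_tb=sum(n.storage_tb for n in self.nodes),
        )

    def add_node(self, node: NodeSpec) -> None:
        # Scaling out = adding a node; a storage-heavy node grows capacity
        # in one dimension without forcing proportional growth in the others.
        self.nodes.append(node)
```

Starting from one balanced node, appending a storage-heavy node (say 4 cores, 32 GB, 40 TB) triples usable storage while barely moving the CPU total – the kind of targeted scaling a three-tier architecture makes hard.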

“We no longer have to plan ahead to deal with growth in unstructured data or the rollout of new applications,” said Daniel Davis, Infrastructure Manager at Endemol Shine UK, a global media producer and distributor. Endemol moved their distributed legacy infrastructure with blade servers and fiber-channel SAN storage platforms – which were ill-equipped for their digital applications with unstructured data demands that spiked without warning – to Nutanix Enterprise Cloud and gained the ability to size workloads on the fly.

Brian McDermott, Senior Network Administrator at Waste Pro USA, had a similar story to share. “With a hyperconverged infrastructure, it only takes minutes to add nodes. Whenever we have new business unit needs we can react quickly. The easy scalability enables us to bring new applications and services to market much faster, and provide better service to all of our customers.”

The real customers that these admins refer to and service, however, are developers.

The Developer as a Customer – and a User

An IT organization – especially in the enterprise – might be well equipped with HCI and have DevOps in place, but that doesn’t mean hyperconvergence is necessarily the driver for application development and deployment.

Why? Because developers – the builders of applications – couldn’t care less about the underlying infrastructure and probably couldn’t tell HCI from a private cloud. Madhukar Kumar, Vice President of Product Marketing, Cloud Services at Nutanix, likened the developer to a racecar driver.

“I’m not interested in how the piston of the engine works – my craft is to make the car run at 200 miles an hour on an uncertain surface,” emphasized Kumar.

What developers care about is the resources they need – VMs, containers, storage (volume-based or object-based), and highly available development and test environments. This is where the admins come to the rescue: HCI enables them to quickly set up virtualized production environments and allocate VMs on demand to different teams from engineering, marketing, and other departments.

In the absence of such flexibility and instant provisioning, the threat of shadow IT looms large. Today, developers (or even staff from non-IT departments) can whip out a credit card and quickly buy a few cores of processing power, a few gigs of memory, and storage space from AWS for testing an application or third-party service that central IT doesn’t even know about! If IT is seen as an obstacle instead of a resource, determined professionals might try to put together their own solutions, with devastating outcomes.

Instead, if IT admins could use the existing HCI to give dev teams the same agility, ease of use, and experience as the public cloud – letting them set up and scale a test environment in a few hours, monitor and tweak it as needed during testing, and then destroy it when the change or module goes live – it would truly empower developers to do their jobs better and further the cause of DevOps in the organization.

How? In most organizations, production environments get the lion’s share of resources – servers, storage, and bandwidth – as well as support. Test and development (test/dev) environments, on the other hand, are left with hand-me-down legacy hardware. This causes a number of problems that hamper the quality of application development and live deployment:

  • Slower test/dev infrastructure robs developers of their time and delays delivery. The more their work is hindered, the less efficient they become. It simply increases time to market for new features, products, or services.
  • When a critical part of a test/dev environment fails, all work is stopped. With no availability guarantees, there is no certainty in application delivery.
  • Significant variations between the test/dev and production environments lead to unpredictability and uncertainty in how the application will run when it’s deployed live.
  • Even if the application performs well in an underpowered test/dev environment (and goes on to do well in production), it only means production resources are underutilized. Plus, maintaining two different sets of hardware is simply redundant.
  • If the test/dev environment doesn’t have the same security policies as the production environment, it could leave vulnerabilities in the application even after live deployment.

HCI addresses all these challenges in the best way possible: building a separate test environment within the same physical infrastructure, with the same resources and capabilities as the live production environment, and the option of scaling either one up on demand. This has the additional benefit of allowing production to be replicated to test/dev and vice versa, protecting the former in the process. Even if a complete replica isn’t needed, adding nodes to the production environment gives it the benefits of deduplication.

All in all, it makes every stage of the Application Lifecycle Management (ALM) shorter for developers.

More Power to Application Developers

Manosiz Bhattacharyya, Senior VP of Engineering at Nutanix, redefined HCI as Hybrid Cloud Infrastructure because it balances the capabilities of both the private and public clouds. 

“Part of it is having the ability for enterprise applications to run in a public cloud. Easily managing and running cloud native applications on-premises is the other side,” Bhattacharyya said in an interview with The Forecast.

Bhattacharyya believes that HCI unifies the IT ops admins and the development team by decentralizing the management plane. 

“HCI enables role-based access controls for development teams and builds self-service and chargeback into the management framework. IT Ops can safely offload creation and configuration of VMs to a developer.”

“So I can write my own code and say go create a virtual machine, install an operating system on it, install Python or Java on top of that, and install my code on top of that.”

Kumar concurs. This is an emerging practice called infrastructure as code, which passes the baton of IT infrastructure configuration from administrators to developers.
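The core idea of infrastructure as code can be shown in a short sketch. This is a toy reconciler, not the API of any real tool (Terraform, Nutanix Calm, and similar products do this at production scale): the developer declares the VMs they want as data, and an idempotent `apply` function converges the actual inventory toward that declaration – creating what's missing, destroying what's no longer declared, and doing nothing when the two already match.

```python
def apply(current: dict, desired: dict) -> dict:
    """Reconcile the actual VM inventory (`current`) with a declarative
    spec (`desired`). Idempotent: applying the same spec twice is a no-op."""
    for name, spec in desired.items():
        if name not in current:
            # A real tool would call the HCI API here to create the VM,
            # install the OS, runtime, and application code per the spec.
            print(f"creating {name}: {spec}")
    for name in current:
        if name not in desired:
            # Declared infrastructure that disappears from the spec is torn down.
            print(f"destroying {name}")
    return dict(desired)
```

Because the spec, not a sequence of manual steps, is the source of truth, the same file that spins up a developer's test VM can be checked into version control and reviewed like any other code.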

Clearly, HCI’s ability to clone production and integration environments drastically reduces the time taken to deploy applications. Well-defined processes will help businesses turn their development environments into significant assets.

Dipti Parmar is a marketing consultant and contributing writer for Nutanix. She writes columns for major tech and business publications from IDG and Adobe as well as Entrepreneur Magazine and Inc. Follow her on Twitter @dipTparmar and connect with her on LinkedIn.

© 2020 Nutanix, Inc. All rights reserved. For additional legal information, please go here.