How Distributed Computing Feeds the Sharing Economy

Web-scale computing and storage capabilities benefit the sharing economy, which relies on distributed cloud technology to thrive.

By Dipti Parmar

March 13, 2020

Uber. Airbnb. Kickstarter. The “sharing economy” has moved from online commerce to the mainstream way of life in less than a decade. A myriad of socio-economic factors has transformed the way humans produce, distribute, trade, and consume.

Companies that operate in the sharing economy have different business models from traditional ones, which hire employees, manufacture or trade in products, and sell them to consumers. Sharing economy companies instead offer a platform or marketplace on which suppliers and consumers buy and sell goods or services.

For businesses and consumers operating in the sharing economy, the most critical factor is on-demand "access" to assets (as opposed to ownership). Their ability to offer immediate access is possible mainly due to digital technology. The internet, peer-to-peer networks, mobile apps, and distributed data centers enable the billions of transactions that take place across many platforms every second. The continuous flow of information and direct feedback loops enable corporations, organizations, and individuals to make full use of product-related assets and resources at all times, maximizing their value.

Here’s an overview of the technology that powers this “new age” economy and the challenges for companies operating in this model.

Security: Burning Holes in Collaborative Effort

Trust is a critical factor in digitally driven transactions. In the sharing economy, establishing trust is more difficult because the end service providers are effectively nameless, known at best by their ratings and reviews.

Players in the market need to work harder to earn recognition, goodwill, and brand awareness because of the lack of a dedicated party to assure quality or guarantee the value of the product or service. The involvement of major institutions such as government or banks is also limited in these areas, leading to further challenges in building customer trust.

This means that the services a company provides need to be bundled and marketed better, since customers may not have previously considered them viable solutions. They need to be personalized and packaged to meet individual requirements, with consumer benefits such as on-time delivery, lower costs, consistent quality, and reduced logistics overhead in mind, all while preserving the integrity and privacy of customer data.

Fortunately, technology is rapidly evolving to provide solutions to these challenges. Advances in distributed and decentralized network architectures are ensuring immutability and full traceability of data. They are also making transactions easier, quicker, and cheaper, bringing trust to the sharing economy ecosystem.

Scaling Access with Distributed Computing

The systems that power on-demand applications for delivering shared services are spread across multiple hosts and data centers, generating a constant stream of machine data. Add the complexity of “smart” devices, and the result is a torrent of big data that can reveal insights into consumer behavior and drive intelligent decisions. Machine-to-machine (M2M) communication over IP networks constantly captures unstructured data from millions of sources to provide information and automation.

This leads to more sophisticated requirements for real-time data processing and decision-making, especially in applications such as driverless cars and avionics. The underlying infrastructure therefore needs to be radically different from traditional systems, which are bound by tight access controls. What's needed is a node-and-cluster architecture in which every node added contributes proportionally to the services the cluster offers.

Nutanix’s Scale-Out Principle states that “Resources needed in any node should be proportional to the size of that node and not to the size of the cluster.”

The key to meeting this requirement is an underlying “web-scale architecture” that scales out exponentially rather than linearly, without needing to be restructured when growth hits a bottleneck. The purpose-built virtualization technology and massive compute environments that allowed Google, Facebook, and Amazon to achieve this are now available to mainstream enterprises.
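To make the scale-out idea concrete, here is a minimal sketch (in Python, and not Nutanix's implementation) of consistent hashing, a technique widely used in distributed systems so that adding a node takes over only a proportional share of the data rather than forcing a wholesale restructuring of the cluster.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Toy consistent-hash ring: adding a node remaps only ~1/N of the keys."""

    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes    # virtual points per node smooth out the distribution
        self._hashes = []       # sorted hash positions on the ring
        self._owners = {}       # hash position -> node name
        for node in nodes:
            self.add_node(node)

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def add_node(self, node: str) -> None:
        # Each virtual point claims a slice of the ring proportional to one node.
        for i in range(self.vnodes):
            h = self._hash(f"{node}#{i}")
            bisect.insort(self._hashes, h)
            self._owners[h] = node

    def get_node(self, key: str) -> str:
        if not self._hashes:
            raise ValueError("ring is empty")
        idx = bisect.bisect(self._hashes, self._hash(key)) % len(self._hashes)
        return self._owners[self._hashes[idx]]


ring = ConsistentHashRing(["node-1", "node-2", "node-3"])
print(ring.get_node("vm-disk-42"))   # maps a data chunk to one of the nodes
ring.add_node("node-4")              # only about a quarter of keys move to the new node
```

The point of the sketch is the scaling behavior: each additional node picks up a proportional share of the keys, so capacity grows with the cluster instead of requiring a rebalance of everything.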

Web-scale is an architecture model for large distributed systems that runs on standard commodity hardware, has a high level of fault tolerance with no single point of failure or bottleneck, takes a parallel approach to building and upgrading systems, and provides programmatic interfaces for control and automation via HTTP-based services.

Source: Webscale
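That last point about programmatic interfaces is what enables fleet-wide automation. As a hypothetical illustration (the endpoint and payload below are invented for this sketch, not any specific vendor's API), provisioning a VM through an HTTP-based service might look like this:

```python
import json
import urllib.request

# Hypothetical example of "programmatic interfaces for control and automation
# via HTTP-based services". The base URL and request body are illustrative only.
BASE_URL = "https://cluster.example.com/api/v1"

def create_vm(name: str, vcpus: int, memory_mb: int) -> dict:
    body = json.dumps({"name": name, "vcpus": vcpus, "memory_mb": memory_mb}).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/vms",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Because the same call can be scripted and repeated, provisioning becomes
# something automation and orchestration tools can drive at scale.
# create_vm("web-01", vcpus=4, memory_mb=8192)
```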

With the increasing use of AI and machine learning by shared service providers, mission-critical distributed computing systems that drive autonomous and automated processes, as well as IoT applications, need to be even more reliable, resilient, and secure.

“Ideally, managing the distributed environment looks like a simple extension of your data center,” said Rajiv Mirani, CTO of cloud platforms at Nutanix. “You express intent of what you want done in a central way, and the system figures out how to distribute your intentions to the edge as needed.”

Source: Fusing the Edge and the Cloud in Distributed Computing
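Here is a minimal sketch of that “express intent centrally, distribute to the edge” pattern. The service names, desired_state map, and apply_change() helper are assumptions made for illustration, not Nutanix's actual API:

```python
# A central, declarative statement of intent: replicas wanted per service.
desired_state = {"web-frontend": 3, "telemetry-agent": 1}

def apply_change(edge_site: str, service: str, start: int = 0, stop: int = 0) -> None:
    # Placeholder for the edge-local action (start/stop containers or VMs).
    print(f"[{edge_site}] {service}: start {start}, stop {stop}")

def reconcile(edge_site: str, current_state: dict) -> None:
    """Compare an edge site's current state to the central intent and converge."""
    for service, want in desired_state.items():
        have = current_state.get(service, 0)
        if have < want:
            apply_change(edge_site, service, start=want - have)
        elif have > want:
            apply_change(edge_site, service, stop=have - want)

# Every edge site runs the same loop against the centrally declared intent,
# so adding sites does not add new management logic.
reconcile("edge-store-17", {"web-frontend": 1})
reconcile("edge-store-42", {"web-frontend": 4, "telemetry-agent": 1})
```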

Redefining Storage

An important cog in the distributed computing wheel is metadata – it describes how big data is stored in the system, including the node, disk, and form in which it resides.

Scalable distributed file systems such as Nutanix’s Acropolis store VM data in small chunks called “extents,” which can be compressed, deduplicated, erasure-coded, or snapshotted, and which may hold structured or unstructured data. There is no single point of failure.

Source: Nutanix on Twitter
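As a rough illustration of what such metadata tracks, here is a hypothetical record for a single extent. This is a simplified sketch, not the actual Acropolis schema:

```python
from dataclasses import dataclass

@dataclass
class ExtentMetadata:
    """Illustrative metadata for one extent: where it lives and in what form."""
    extent_id: str
    node: str            # which host currently holds the extent
    disk: str            # which physical disk on that host
    compressed: bool
    deduplicated: bool
    erasure_coded: bool
    replicas: tuple      # other (node, disk) locations, so no single point of failure

record = ExtentMetadata(
    extent_id="ext-0009a7",
    node="node-2",
    disk="ssd-1",
    compressed=True,
    deduplicated=False,
    erasure_coded=True,
    replicas=(("node-3", "ssd-0"),),
)
```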

This works well with a hyperconverged infrastructure, where microservices for data, metadata, analytics, and so on run across all the nodes in the cluster. No single node owns all the data, yet everything is managed through one responsive GUI with little overhead.

Again, this boils down to enterprise datacenters needing an overhaul when it comes to VM management. Virtual computing platforms with web-scale storage should scale out linearly, have the ability to self-heal, always be available, and be able to handle prolonged workload bursts.

Source: Nutanix Bible

A Hybrid Cloud on a Hyperconverged Infrastructure

The talk of modern, agile, and scalable datacenters brings us to hyperconverged infrastructure (HCI), which enables two key components of distributed computing:

  • Distributed ledger technology (of which blockchain is the most famous example) which doesn’t have any single or central administrator or point of data storage

  • Peer-to-peer networks which enable participants to query and access information from anywhere while minimizing computing and processing resources

Together, blockchain and peer-to-peer networks have the potential to power the sharing economy across different verticals and industries.
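To see why a distributed ledger supports the trust and traceability described earlier, consider this minimal hash-chain sketch in Python. It is nowhere near a production blockchain (no consensus, no peers), but it shows how tampering with any past record becomes detectable:

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    # Deterministic hash of an entry's contents.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(ledger: list, payload: dict) -> None:
    # Each new entry embeds the hash of the previous one, forming a chain.
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"payload": payload, "prev_hash": prev}
    entry["hash"] = entry_hash({"payload": payload, "prev_hash": prev})
    ledger.append(entry)

def verify(ledger: list) -> bool:
    # Recompute every hash; any edit to past data breaks the chain.
    prev = "0" * 64
    for entry in ledger:
        if entry["prev_hash"] != prev:
            return False
        if entry["hash"] != entry_hash({"payload": entry["payload"], "prev_hash": prev}):
            return False
        prev = entry["hash"]
    return True

ledger = []
append(ledger, {"ride": "abc123", "fare": 14.50})
append(ledger, {"ride": "def456", "fare": 9.75})
print(verify(ledger))                 # True
ledger[0]["payload"]["fare"] = 0.01   # tamper with history
print(verify(ledger))                 # False -- the tampering is detectable
```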

The success of Bitcoin has spurred major financial institutions to invest in FinTech startups, which in turn get access to a reliable, readymade customer base. This is significant because financial services are one of the fastest-growing industries that embrace hyperconvergence and enterprise cloud infrastructure.

The central components of a secure and reliable sharing economy solution are the user experience, APIs, a data-sharing repository, and a core processing platform.

All of the above are supported and managed in a hyperconverged infrastructure by way of:

  • Enhanced control, security, and compliance

  • Better setup for development, testing, and deployment

  • Application delivery across multicloud environments

The Human-First Economy

The technology-enabled sharing economy will continue to transform an increasing number of day-to-day commercial activities. From book recommendations to medical records, any number of institutions and organizations can securely access and transfer data amongst themselves. The promise of instant transactions and collaborative consumption will be aided and facilitated by next-generation technology that's always on and failsafe.

Source: Pivotal

Andi Mann, chief technology advocate at Splunk, explained the importance of using cloud-native apps. "Taking advantage of cloud services means using agile and scalable components like containers to deliver discrete and reusable features that integrate in well-described ways, even across technology boundaries like multicloud, which allows delivery teams to iterate using repeatable automation and orchestration rapidly."

The technological considerations for a unified hybrid multicloud environment should be:

  • OS-level virtualization: A single OS instance is divided into isolated, dynamically created containers, each with a unique writable file system and resource quota so that underlying infrastructure dependencies are abstracted.

  • Updatability: This is one of the benefits of cloud-native apps, which are always available and can be updated without being taken offline. In contrast, on-premises apps typically require downtime when they’re being updated.

  • Flexibility: Custom-built applications and services must run on any private or public cloud with little modification, so that vendor lock-in is minimized.

  • Right-sized capacity: Infrastructure provisioning and configuration are optimized with dynamic allocation of resources. This means application lifecycles and workloads are better managed according to demand (see the sketch after this list).

  • Collaboration: The ideal mix of cloud-native and virtualization facilitates improved DevOps, which means that people, processes, and tools are better utilized in operations to bring application code into production more quickly and efficiently.
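To illustrate the “right-sized capacity” point above, here is a minimal sketch of a proportional scaling rule. The utilization target and replica bounds are assumptions for illustration, not any particular autoscaler's defaults:

```python
# Scale replica count to observed demand, within fixed bounds.
def desired_replicas(current: int, cpu_utilization: float,
                     target: float = 0.6, min_n: int = 2, max_n: int = 20) -> int:
    """Proportional scaling rule: grow when hot, shrink when idle."""
    if cpu_utilization <= 0:
        return min_n
    proposed = round(current * cpu_utilization / target)
    return max(min_n, min(max_n, proposed))

print(desired_replicas(current=4, cpu_utilization=0.9))  # 6 -- scale out under load
print(desired_replicas(current=4, cpu_utilization=0.3))  # 2 -- scale in when idle
```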

Choose the Right Providers

It's tempting to purchase and install a monitoring product from the largest provider with an existing relationship, or from the one that seems to offer plenty of features at a reasonable price. Here, caution is warranted: only by matching the needs of the enterprise to the proposed product will the best fit for the organization be identified.

Pitfalls include purchasing services that may not be fully utilized; accepting services that appear to cover most requirements but leave unacceptable gaps; and choosing products that can't be integrated, which is especially common with on-premises legacy applications. This last case may call for a second product or provider. There may not be a perfect answer to every scenario, but knowing the exceptions up front is critical to success.

There are several cloud management platforms (CMPs) out there able to integrate on-premises, hybrid, and multicloud environments, each with its own services. Finding the one that best suits a given enterprise depends on the requirements. Keeping in mind the goal of simplifying the operational model means looking at several providers very closely.

Make sure to have direct conversations with the short list of final provider candidates, specifically about implementation and product support, before making the final selection. Often, this makes the right decision much clearer.

Deploy and Normalize

Once your choices have been made, the unavoidable disruption during implementation must be managed. A phased approach is common practice while migrating to newer technologies. The internal team will partner closely with the provider’s team to define timelines and resources required for the migration.

Some environments will move quickly and rather seamlessly. Others may take a bit more work or custom integrations. It is important in the phased approach to allow learning, adjusting, and normalizing to happen along the way.

The post-implementation period will bring growth and adjustments throughout the organization. Learning the big features and small nuances takes time. Rules are often over-engineered at first; they require review, adjustment, and fine-tuning so they don't place unnecessary limitations on access and functionality.

Some CMPs include AI and Machine Learning (ML) capabilities in their products. As these are integrated into the organization’s analytics, new patterns and opportunities will emerge.

The whole selection and migration process is ongoing and will take up more time initially, so it is best to start now. Hybrid and multicloud need no longer be synonymous with complexity. It’s time for organizations to claim the simplicity – or single pane of glass – promised by the integrated, converged cloud computing, networking, and storage model.

Featured Image: Pixabay

Dipti Parmar is a contributing writer. She has written for CIO.com, Entrepreneur, CMO.com and Inc. magazine. Follow her on Twitter @dipTparmar.

