Have you ever stopped to imagine how much work goes into product placement on grocery store or supermarket shelves? It has to be just right in order to catch your eye. And placing products is only part of the puzzle; deciding which products to expand is another. With new products continually being introduced, there are thousands of possible combinations on shelves.
The digital age has disrupted the expensive manual process of loading different product combinations and carting in focus groups to view “mock” shelving. Procter & Gamble, for example, has removed these manual processes by using virtual shopping shelf technology. Application developers can program these virtual shelves to render new combinations in minutes, relying on a massive amount of data and the ability to analyze it quickly. This makes it possible to test and refine thousands of different combinations without moving a single product. At its core, Procter & Gamble has moved from manual procedures to automated ones, driven by the need to save time and reduce costs.
Tackling Ongoing Data Growth
We all know that data is growing at extraordinary rates, especially in digital applications like the “virtual shopping shelf,” and the projections are especially telling. IDC predicts that the world’s data will hit 40 zettabytes by 2020, with unstructured data accounting for 63% of it.* That’s a massive amount of unstructured data, and enterprises currently store it in complex, difficult-to-manage, limited-scale silos. Some enterprises also choose file systems for storing large amounts of data, which is not always the best fit for application developers. These developers look for simple constructs to put and get massive amounts of data without worrying about directory structures or paths. The public cloud has helped solve some of these challenges with elasticity and ease of use, but it lacks the security, control, and cost efficiencies needed to run large-scale environments predictably. This leads to inefficiencies and reduces business agility. Enterprises need a new paradigm for consuming and managing data growth at this scale, in some cases while maintaining regulatory compliance.
Introducing Nutanix Objects™
As customers continue their Nutanix adoption, they are challenged with handling unanticipated data growth while trying to make sense of it. Nutanix has taken the problem head-on and will offer a new object-based storage service called Nutanix Objects™, available in the Enterprise Cloud OS as part of the one-click deployment process.
In the future you will have the ability to run VM, file, block, and object services in a single OS. This new service has been built for the multi-cloud era: globally converged object-based storage with infinite scale. Application developers will be able to consume S3-compatible storage with strong performance as their needs grow.
Object-based Storage 101
If you haven’t used object storage before, it might seem confusing, but it’s actually quite simple. Let’s break it down:
- Object-based storage is a bit different from regular block or file system storage. Instead of the directory hierarchy found in traditional file systems, it uses a flat list of objects, each identified by a unique ID rather than a filename. This drastically reduces the overhead required to store data, because far less metadata is needed.
- Additionally, each object is stored together with its metadata, making the system highly scalable. Objects can be a few kilobytes or many terabytes in size, and a single container can hold billions of them. Application developers can access objects using simple S3-compatible API calls through “GET” and “PUT” actions, without worrying about complex directory structures.
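To make the flat put/get model concrete, here is a minimal in-memory sketch in plain Python. This is conceptual code, not Nutanix or S3 code: it just shows that objects live in a flat key space, travel with their metadata, and involve no directory traversal.

```python
class Bucket:
    """Toy model of an object store bucket: a flat key space, no directories."""

    def __init__(self):
        self._objects = {}  # key -> (data, metadata)

    def put(self, key, data, metadata=None):
        # Keys are opaque strings; "cam42/2017-06-01/frame-000123.jpg" is
        # just a key that happens to contain slashes, not a nested path.
        self._objects[key] = (bytes(data), dict(metadata or {}))

    def get(self, key):
        data, _ = self._objects[key]
        return data

    def head(self, key):
        # Metadata is stored with the object, so no separate lookup is needed.
        _, metadata = self._objects[key]
        return metadata


bucket = Bucket()
bucket.put("cam42/frame-000123.jpg", b"...jpeg bytes...", {"camera": "cam42"})
print(bucket.get("cam42/frame-000123.jpg"))   # the stored bytes
print(bucket.head("cam42/frame-000123.jpg"))  # {'camera': 'cam42'}
```

Real S3-compatible clients work the same way at the API level: a PUT with a key and body, a GET with a key, and metadata returned alongside the object.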
The added benefit of using the Nutanix Enterprise Cloud OS is that you inherit all the core data path efficiencies trusted by thousands of customers – compression, deduplication, erasure coding, and much more. It drastically reduces the time spent buying, building, deploying, and managing complex siloed infrastructure.
Nutanix Objects for a Multi-Cloud World
In the age of applications spanning the public, private, and distributed clouds, we at Nutanix have a unique opportunity to design an object storage solution that spans all three clouds.
The key characteristics of such a solution are:
- Global Namespace (a single namespace across all clouds)
- Infinite Scale (unencumbered by architectural limits of the past)
- One-click Simplicity (intent-based design or opinionated design)
Global Namespace: A true global namespace should have a multi-cloud mindset as its focal point, which is why the Nutanix solution utilizes S3 APIs for Objects™. It will provide a single namespace and storage fabric across Nutanix clusters and public clouds. Additionally, applications that write object data to a Nutanix cluster will be able to replicate or tier that data across clouds. This will enable developers to write applications against a single namespace without having to rewrite them if those applications later move across cloud boundaries.
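The portability benefit can be sketched in a few lines. In the snippet below, the endpoint URLs and bucket name are invented for illustration; the point is that with an S3-compatible API, only the client configuration changes when the application moves across cloud boundaries, while the application code that puts and gets objects stays identical.

```python
# Hypothetical endpoints; in a real deployment these come from configuration.
ENDPOINTS = {
    "public-cloud": "https://s3.example-public.com",
    "on-prem": "https://objects.example-dc.local",
}


def make_client_config(target, bucket="surveillance"):
    """Same application, same bucket, same API; only the endpoint differs."""
    return {"endpoint_url": ENDPOINTS[target], "bucket": bucket}


# An S3 client library would consume this config, e.g. (not executed here):
#   client = boto3.client("s3", endpoint_url=cfg["endpoint_url"])
cfg = make_client_config("on-prem")
print(cfg["endpoint_url"])  # https://objects.example-dc.local
```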
Infinite Scale: This solution scales easily and reduces upfront costs by minimizing unused resources. When a Nutanix cluster is projected to run out of compute or storage, simply add more capacity non-disruptively to scale the cluster and redistribute VM resources. This drastically lowers the barriers between public and private clouds.
One-click Simplicity: In order to span multiple clouds, management must be seamless and easy. The only way to scale and handle large amounts of unstructured data is to deploy Objects with single-click operations that application developers can quickly consume. The Nutanix Enterprise Cloud OS not only simplifies operations, it also uses machine intelligence and automation to reduce many clicks to one via Prism. It learns from large volumes of system data to automate common tasks and generate actionable insights for optimizing virtualization, multi-cloud management, and day-to-day tasks.
Where is Object Storage Relevant?
There are several ways to effectively utilize an object-based storage solution across a range of workloads, such as big data analytics, data warehousing applications, large-scale IoT sensor data, and more. Let’s focus on one of my favorites. Imagine a scenario where law enforcement agents need access to surveillance footage in order to solve a crime. At a high level, they would need cameras constantly recording surveillance data, the ability to pull it up in real time, and the ability to run analysis on it to find certain individuals. They might even need to go through footage dating back days or even months. A system like this needs a scalable backend that can grow non-disruptively. Depending on the number of cameras and the quality of the images captured, such a deployment could generate terabytes of data in a matter of days. London, for example, has about 422,000 CCTV cameras.* At around 20 frames per second and good resolution, even a few hundred such cameras could easily store about half a petabyte of data in less than a month.
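The back-of-the-envelope math is easy to check. Under illustrative assumptions (roughly 20 KB per compressed frame, which is an assumption of mine, not a figure from the text), a modest fleet of 500 cameras crosses the half-petabyte mark within a month:

```python
FRAMES_PER_SECOND = 20
BYTES_PER_FRAME = 20_000      # assumed ~20 KB per compressed frame
SECONDS_PER_DAY = 86_400
DAYS = 30
CAMERAS = 500                 # a modest fleet, far below London's total

per_camera = FRAMES_PER_SECOND * BYTES_PER_FRAME * SECONDS_PER_DAY * DAYS
fleet = CAMERAS * per_camera

print(per_camera / 1e12)  # ~1.04 TB per camera per month
print(fleet / 1e15)       # ~0.52 PB for the fleet per month
```

At London’s full scale of hundreds of thousands of cameras, the same math lands in the hundreds of petabytes per month, which is exactly why a non-disruptively scalable backend matters.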
In a typical surveillance environment, security footage data is captured using input devices (i.e. security cameras), stored for real-time access, and then post-processed for facial recognition. This sounds quite complicated, but if you break it down at the architecture level and utilize a flexible solution that easily scales, it’s not so convoluted.
- Cameras feed information to an application running on, for example, Linux clients
- Data is stored in one or more encrypted object storage buckets, where it stays for about a week
- Post-processing applications (big data analytics) can easily crawl the single namespace and run facial recognition on this data. Eventually, portions of the data are moved to another bucket for longer-term storage.
- Streaming applications are able to load either real-time information or post-processed data and quickly help law enforcement agents
This scenario is a great fit for the Nutanix object-based storage solution, not only for its scale and cost efficiency but also for the portability of the data. With a global namespace that straddles public and private datacenters, developers could use Nutanix Calm to start in the public cloud while the environment is small and unpredictable. As they scale and learn the environment, they can move the application on-premises and keep using the same data storage location through the same S3-compatible API. There would be no need to rewrite the application.
That’s just one example, but there are many scenarios across industries where object-based storage can be effectively utilized: for example, hospitals that want to store vendor-neutral archives (PACS imagery from X-Ray, Meditech, and Epic applications), or media companies that need to store many large image files.
Next time you are walking around a supermarket and see a shopping shelf or notice surveillance cameras on a street corner, think about what happens to the data and how much has to be stored to drive intelligent insights.
Nutanix Objects is currently under development. Pricing details will be announced closer to the applicable release dates.
*Source: IDC File- and Object-based Storage Forecast, 2016-2020
Forward-Looking Statements Disclaimer
This blog includes forward-looking statements, including but not limited to statements concerning our plans and expectations relating to product features and technology that are under development or in process and capabilities of such product features and technology, our plans to introduce product features in future releases, product performance, competitive position and potential market opportunities. These forward-looking statements are not historical facts, and instead are based on our current expectations, estimates, opinions and beliefs. The accuracy of such forward-looking statements depends upon future events, and involves risks, uncertainties and other factors beyond our control that may cause these statements to be inaccurate and cause our actual results, performance or achievements to differ materially and adversely from those anticipated or implied by such statements, including, among others: failure to develop, or unexpected difficulties or delays in developing, new product features or technology on a timely or cost-effective basis; delays in or lack of customer or market acceptance of our new product features or technology; the introduction, or acceleration of adoption of, competing solutions, including public cloud infrastructure; a shift in industry or competitive dynamics or customer demand; and other risks detailed in our Form 10-K for the fiscal year ended July 31, 2017, filed with the Securities and Exchange Commission. These forward-looking statements speak only as of the date of this presentation and, except as required by law, we assume no obligation to update forward-looking statements to reflect actual results or subsequent events or circumstances.
© 2017 Nutanix, Inc. All rights reserved. Nutanix, AHV, Acropolis Compute Cloud, and the Nutanix logo are registered trademarks or trademarks of Nutanix, Inc. in the United States and other countries. All other brand names mentioned herein are for identification purposes only and may be the trademarks of their respective holder(s).