The decision’s been made: you’re switching to the Nutanix Objects Storage platform for your object storage needs. Fantastic choice! Now all you need to do is migrate your data over to the new platform. Whether the scenario is migrating from another on-prem object storage vendor or cloud repatriation, the question is the same: how do I get my data from A to B?
Moving your data from one object storage vendor (or the cloud) to another can be daunting, which is why Nutanix built the Objects Replicator (OR) migration tool. There are, of course, a number of object data migration products on the market, but this blog focuses exclusively on OR, which Nutanix provides to its customers at no cost. Getting right to it, OR is a tool you can point at your existing S3 API-compatible object store to migrate the data across to your Nutanix Objects Storage deployment. The OR tool is deployed as a virtual appliance and works at the client level, issuing S3 API calls to copy the data between source and destination. This means it can work with any object store that supports the S3 API standard. This blog explains how the OR tool can be used to successfully migrate your existing object data to Nutanix Objects Storage.
Let’s use a simple example to explain the workings of OR. We have a source bucket aptly called “migration-src”, which resides on an object store supplied by another vendor and contains 99 objects. This is the bucket we will be migrating data from.
Our destination system is a Nutanix Objects Storage deployment (“objstore4.tme.local”) and the bucket we wish to migrate the data to is “migration-dst” (once again, the clue’s in the name). Since this is a Nutanix object store, we can conveniently use the inbuilt browser-based S3 client, known as Objects Browser, to view the destination bucket’s contents.
The destination bucket currently contains nothing but a single pre-existing object. Take note of that, because we will be configuring the OR tool to remove “stale” objects, meaning any objects in the destination bucket that weren’t put there by the current replication process will be deleted.
Let’s bring Objects Replicator into the picture at this point. The tool is available to Nutanix customers as a Limited Availability release, meaning if you wish to use it, just contact your Nutanix account team and they’ll make it accessible. There’s a user guide for it too, available here. While this blog will give you a good feel for the tool’s use and capabilities, you should consult the user guide before undertaking a migration – and if you have questions you can reach out to the Nutanix support team.
The OR tool comes as both an ISO and a qcow2 image. If you’re using the Nutanix AHV hypervisor, the qcow2 image is the quickest way to get up and running. Create a VM with 8 vCPUs and 8 GB RAM, attach a 100 GB home vdisk in addition to the provided qcow2 vdisk, and you’re good to go. The OR VM can be placed anywhere you choose, including directly on the Nutanix Objects Storage cluster you’re migrating to – this takes advantage of the one-VM-permitted-per-NUS-node rule. If you do choose to run OR on the cluster, its proximity to the destination object store will help write (PutObject) performance. When the VM is up and running, ssh to it as the “nutanix” user and you’ll see the following:
Next, we need to point the OR tool at our migration source and destination buckets – this is done by editing a simple JSON configuration file. A template file is provided to help with this, located in the nutanix user’s home directory. In this file you specify the source-to-destination migration mappings; multiple mappings can be entered, and these are executed sequentially. As well as defining the various source and destination buckets and object store FQDNs, you can stipulate the replication interval and whether pre-existing or “stale” objects in the destination bucket (those that weren’t replicated there by the current migration action) should be removed – this is disabled by default. There’s a sketch of what such a file might contain just below.
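To make the shape of that file concrete, here’s a purely illustrative sketch that generates a mapping file with Python. The field names below are hypothetical – invented for illustration only – so consult the supplied template file and the user guide for the actual schema.

```python
import json

# Purely illustrative – the field names here are hypothetical. The real
# schema is in the template file shipped in the nutanix user's home
# directory; consult the OR user guide before building your own config.
config = {
    "replication_interval": "24h",  # can be seconds, minutes or hours
    "mappings": [
        {
            "source": {
                "endpoint": "https://legacy-objectstore.example.com",
                "bucket": "migration-src",
            },
            "destination": {
                "endpoint": "https://objstore4.tme.local",
                "bucket": "migration-dst",
            },
            "remove_stale_objects": True,  # disabled by default
        },
        # Additional mappings go here; they are executed sequentially.
    ],
}

with open("migration-config.json", "w") as f:
    json.dump(config, f, indent=2)
```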
A 24-hour replication interval is recommended, but really it depends on how much data needs to be replicated overall. The interval can be expressed in seconds, minutes or hours. Various topologies are supported, including 1-to-1 (a straightforward migration), 1-to-many (a distribution scenario) and many-to-1 (a consolidation scenario). If using many-to-1, it’s important that the different sources do not contain objects with identical names, since only one of those objects can end up as the live object at the destination.
Once all the source-to-destination mappings have been created you’re almost ready to rock, but before kicking off the migration it’s advisable to first configure alerting for the OR tool in the Prism Central console. This useful integration, set up by running a simple script, helps keep you informed about the progress of the migration, including whether any problems have arisen. Reassuringly, if OR crashes or the script fails for any other reason, a critical alert is generated, and the Objects Replicator task is restarted by the monitor script.
Ok, now we’re ready to perform the migration! Just a couple of things to note before we do.

Firstly, when OR runs, it will create any destination bucket named in the json file that doesn’t already exist. While that can be convenient, if there are lifecycle policies on any of the source buckets that you wish to recreate on their destination counterparts, the destination buckets should be created in advance of the data being migrated, not left for OR to create. The source bucket lifecycle policies can then be exported using the GetBucketLifecycle S3 API and applied to the corresponding Nutanix buckets using the PutBucketLifecycle S3 API (a sketch of this follows below). All of this needs to happen up front because lifecycle policies are not retroactive, i.e., they are only effective on data entering the bucket after the lifecycle policy was created. Note that lifecycle policies on the destination buckets should also be set to expire incomplete multipart uploads.

Secondly, OR migrates only the data payload; it cannot migrate object metadata or tags. If any of the source buckets are versioned, it will migrate only the current object version and none of the past versions. Nor does it migrate bucket properties, though we’ll discuss how that can be dealt with later on (hint: it’s similar to how lifecycle policies are handled, described above).

With those points covered, kicking the migration off is as simple as issuing a command that points to the json file you populated, and away we go!
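Here’s the promised sketch of the lifecycle export/import step, using Python and boto3. The endpoints and credentials are placeholders, and it assumes a lifecycle configuration already exists on the source bucket – treat it as a starting point, not a finished script.

```python
import boto3

# Placeholder endpoints and credentials – substitute your own.
src = boto3.client("s3", endpoint_url="https://legacy-objectstore.example.com",
                   aws_access_key_id="SRC_KEY", aws_secret_access_key="SRC_SECRET")
dst = boto3.client("s3", endpoint_url="https://objstore4.tme.local",
                   aws_access_key_id="DST_KEY", aws_secret_access_key="DST_SECRET")

# Export the lifecycle rules from the source bucket
# (raises a ClientError if no lifecycle configuration exists).
rules = src.get_bucket_lifecycle_configuration(Bucket="migration-src")["Rules"]

# Add a rule to expire incomplete multipart uploads on the destination.
rules.append({
    "ID": "abort-incomplete-mpu",
    "Status": "Enabled",
    "Filter": {"Prefix": ""},
    "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
})

# Apply the combined rules to the pre-created destination bucket.
dst.put_bucket_lifecycle_configuration(
    Bucket="migration-dst",
    LifecycleConfiguration={"Rules": rules},
)
```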
Before we move on to monitoring, a quick word on dealing with migrations at scale. If you have a large number of buckets, each containing a large number of objects, you can deploy multiple OR VMs and have each one drive the migration activity between different source/destination bucket pairs. As long as the different OR instances aren’t trying to write to the same target bucket, this approach works well and introduces parallelization for faster completion of the overall migration.
In terms of monitoring, I’ve mentioned how Objects Replicator alerts are configurable in Prism Central, but it’s also possible to monitor the activity in real time by inspecting the log output created by the Objects Replicator VM. This yields information such as copied object count, transfer rate, % of data transferred, % yet to be transferred, and errors (if any) – see the output example below. In the same output you can also see that an object called “stale_data.txt” was removed from the destination Nutanix bucket, as the stale object removal option was set to “true” in the configuration file for this particular mapping.
The replication process runs repeatedly based on the schedule defined in the configuration file and will carry on doing so until you issue the stop command. By running in iterative cycles like this, objects newly uploaded to the source bucket will be detected and migrated on subsequent runs, effectively keeping the source and destination buckets synchronised. In the next stage of our example, 30 objects were deleted from the source bucket between runs, leaving 69 remaining.
Objects Replicator detects the discrepancy between source and destination and performs a comparison to see which objects have changed or been removed – it uses a combination of object size and md5sum comparisons to do this. In the first two lines of the output below the discrepancy is evident; subsequent lines show the deletes being executed on the destination bucket.
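To illustrate the idea behind that size/md5sum comparison (and only to illustrate it – OR does this internally and far more efficiently), here’s a conceptual boto3 sketch reusing the src and dst clients from earlier. Re-downloading every object like this would be prohibitively slow on a real bucket.

```python
import hashlib

def fingerprints(client, bucket):
    """Map each key to (size, md5 of content). Conceptual illustration only."""
    result = {}
    for page in client.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            body = client.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
            result[obj["Key"]] = (obj["Size"], hashlib.md5(body).hexdigest())
    return result

src_fp = fingerprints(src, "migration-src")
dst_fp = fingerprints(dst, "migration-dst")

# Destination objects whose size/md5sum no longer match a source object
# (or whose source object is gone) need deleting or re-copying.
out_of_sync = [key for key, fp in dst_fp.items() if src_fp.get(key) != fp]
```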
Of course, at some point clients (end users, applications, etc.) will need to stop writing to the source bucket(s) so there are no more updates needing to be replicated. At that point, when all clients have disconnected and the logs show no further changes on the source buckets, you’re ready to cut over to the Nutanix Objects Storage system. Before that happens, you’ll likely want to update the destination bucket properties so they match those of the source buckets – I’m referring here to things like access permissions, lifecycle rules and bucket tags, as these are not inherently applied to the destination bucket by the Objects Replicator tool.
It makes sense to create user accounts in Nutanix Objects Storage with names identical to those used in the source object store. If the users in question are Active Directory/OpenLDAP members, IAM keys can be bulk-generated for all members of a group at the same time. Keys can also be generated for users individually, if required, based on email address. Either way, don’t forget to distribute the new keys to the users/app owners.
Next, you’ll need to ensure the same permissions that exist on the source buckets are applied to the corresponding destination buckets. Nutanix Objects Storage supports Amazon’s policy document json format, so if the source also supports this you can export the bucket access policies using the GetBucketPolicy S3 API and apply the json output to the corresponding Nutanix buckets using the PutBucketPolicy S3 API. Some edits may be needed first, but if the destination buckets have been given the same names as the source buckets and the user principal names remain the same as on the source system, the changes will be minimal.
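As an illustration, here’s a minimal boto3 sketch of that policy export/import, again reusing the src and dst clients from earlier. Since the bucket names differ in our example, it does a naive textual rename of the bucket in the policy’s resource ARNs – always review the resulting policy before applying it.

```python
import json

# Export the source bucket's access policy (returned as a JSON string).
policy = src.get_bucket_policy(Bucket="migration-src")["Policy"]

# Our destination bucket has a different name, so rewrite references to it.
# A blunt string replace works for this example; review the output for
# your own policies before applying them.
policy = policy.replace("migration-src", "migration-dst")

json.loads(policy)  # sanity-check that the result is still valid JSON
dst.put_bucket_policy(Bucket="migration-dst", Policy=policy)
```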
Another consideration is bucket tags. If any of the source buckets have tags applied to them, these can be exported using the GetBucketTagging S3 API and then applied to the corresponding Nutanix buckets using the PutBucketTagging S3 API.
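The tagging round trip is the simplest of the three. Here’s a sketch, once more reusing the clients from earlier and assuming the source bucket actually has tags (GetBucketTagging returns an error if it has none).

```python
# Copy the source bucket's tags to the destination bucket.
tags = src.get_bucket_tagging(Bucket="migration-src")["TagSet"]
dst.put_bucket_tagging(Bucket="migration-dst", Tagging={"TagSet": tags})
```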
With the above elements in place, and once all the data has been migrated, the cutover can begin. To ease this, you could apply the source object store’s FQDN to the Nutanix object store as a secondary FQDN and then simply update DNS to associate that FQDN with the Nutanix object store’s load balancer IPs. Client requests are now directed to and served by the Nutanix object store – the migration is complete!
Conclusion
The Objects Replicator tool gives Nutanix customers a simple, efficient way to migrate object data from a legacy or cloud object storage platform to Nutanix Objects Storage. Alongside the Nutanix Move product’s file migration capabilities for Nutanix Files Storage, Objects Replicator is yet another reason to consider the Nutanix Unified Storage solution for your unstructured data storage needs.
© 2025 Nutanix, Inc. All rights reserved. Nutanix, the Nutanix logo and all Nutanix product, feature and service names mentioned herein are registered trademarks or trademarks of Nutanix, Inc. in the United States and other countries. Other brand names mentioned herein are for identification purposes only and may be the trademarks of their respective holder(s). This post may contain links to external websites that are not part of Nutanix.com. Nutanix does not control these sites and disclaims all responsibility for the content or accuracy of any external site. Our decision to link to an external site should not be considered an endorsement of any content on such a site. Certain information contained in this post may relate to or be based on studies, publications, surveys and other data obtained from third-party sources and our own internal estimates and research. While we believe these third-party studies, publications, surveys and other data are reliable as of the date of this post, they have not been independently verified, and we make no representation as to the adequacy, fairness, accuracy, or completeness of any information obtained from third-party sources.