We live in an era of convergence. From our cellphones to our cars, everything is being reinvented to do more than just one thing. Cellphones used to be for communicating; now they are used to browse the internet, listen to music, take pictures, make payments and even manage health. So why should protecting and managing a company’s data be any different? With convergence dominating the age of private and hybrid clouds, it shouldn’t.
Today, creating and keeping copies or snapshots solely for data protection is passé. Not only that, customers should not be forced to buy and deploy multiple separate platforms for their primary workloads and data management tasks.
A modern data management infrastructure should be converged with the application infrastructure and empower customers to leverage snapshots for dev/test, reporting, analytics and compliance – not just disaster recovery or backup/restore. So, what’s preventing convergence in cloud data management?
Space and Performance Inefficiencies Abound in Creating Copies
Traditional LUN-based approaches create major obstacles to realizing cloud-scale data management by imposing storage-centric space, performance and agility limitations. While some vendors offer VM-granular approaches, restricting the smallest abstraction to the VM level and offering only static policy management constrains the flexibility and efficiency of cloud data management use cases.
It’s an unfortunate fact that a majority of the traditional infrastructures deployed today only let you create snapshots at the coarse granularity of a LUN. Since a LUN typically holds tens to hundreds of VMs, you end up targeting more objects than you want, or some objects more frequently than you need. The result is space and performance inefficiency.
Some providers have added virtual machine (VM) level snapshots, which are more fine-grained yet still do not go far enough. A single virtual machine could have several virtual disks attached, and only a subset of those vDisks may hold data actually worth targeting. Just as customers want to target specific objects within LUNs, they want to target specific virtual disks within a VM and avoid the full reconfiguration and downtime required when restoring an entire VM.
But there’s hope, because only Datrium offers a higher level of snapshot granularity, allowing customers to take a snapshot at the individual datastore-file level. Any file stored on the Datrium DVX datastore can be independently snapped, including virtual disks, VM templates, ISOs, OVAs and any other file you keep on your datastore beyond VMDKs.
In addition, snapshots are instantaneous, fully space efficient and have no impact on the original object. This enables DVX customers to create as many snapshots as they need without worrying about space and performance overheads.
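The idea behind instantaneous, space-efficient file-level snapshots can be sketched with a toy model: a snapshot records an immutable reference to one file’s current blocks rather than copying data, so it costs nothing until the live copy diverges, and sibling objects in the same datastore are untouched. The `Datastore` class and its methods below are hypothetical illustrations of the concept, not Datrium’s actual API.

```python
class Datastore:
    """Toy datastore: maps file names to lists of data blocks."""

    def __init__(self):
        self.files = {}      # live files: name -> list of blocks
        self.snapshots = []  # (snap_id, file name, frozen block refs)

    def write(self, name, blocks):
        self.files[name] = list(blocks)

    def snap_file(self, name):
        """Snapshot one file by recording a reference to its blocks.

        No data is copied, and sibling files (other vDisks, ISOs,
        templates) in the same datastore are untouched -- unlike a
        LUN-level snapshot, which captures every object at once.
        """
        snap_id = len(self.snapshots)
        self.snapshots.append((snap_id, name, tuple(self.files[name])))
        return snap_id

    def restore_file(self, snap_id):
        """Restore just the snapped file, leaving everything else alone."""
        _, name, blocks = self.snapshots[snap_id]
        self.files[name] = list(blocks)


ds = Datastore()
ds.write("app.vmdk", ["b0", "b1"])
ds.write("logs.vmdk", ["L0"])
sid = ds.snap_file("app.vmdk")      # instantaneous: stores a reference only
ds.write("app.vmdk", ["b0", "XX"])  # live copy diverges after the snap
ds.restore_file(sid)                # only app.vmdk is rolled back
```

Because a snapshot is just a frozen set of block references, taking many of them adds no copy cost, which is the property that makes snapshot-per-use-case practical.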
Too Many Hurdles and Complexity in Leveraging Copies
As the saying goes, garbage in, garbage out, and this applies to traditional systems. Since a LUN is the smallest unit of snapshot, it also becomes the smallest unit of restore or clone, even when just a single VM or file is needed. This exacerbates space, performance and process inefficiencies: the LUN containing the object has to be discovered, the LUN snapshot has to be mounted, and the desired object has to be located within it. And while there are add-on tools and even entire platforms that can help with this, cobbling together and using multiple platforms has its own complexity.
Furthermore, just like a snapshot, customers also want to restore and clone just their data (sub-VM level) versus the entire virtual machine, which VM-level approaches don’t address.
But again, there’s good news: unlike traditional converged infrastructure vendors limited by array and LUN technology, Datrium enables restores and clones at the individual datastore-file level. These operations are instantaneous, fully space efficient and have virtually no performance overhead on the production application. This enables more flexibility in both backup/restore and dev/test use cases.
Let’s look at two examples where this adds value:
vDisk-level backup/restore – If an in-guest file gets deleted from a VM, one can simply clone the affected virtual disk from a point-in-time snapshot and attach it back to the original VM for an intra-VM transfer. In a VM-granular system, this would require either restoring an entire VM (which requires powering off the original VM, affecting uptime) or cloning an entire VM (which affects the overall environment and requires new IPs, software licenses, etc.) from a point-in-time snapshot.
vDisk-level clones – In a virtualized database setup, one can simply clone the virtual disks containing the data and attach them to another VM for dev/test, analytics or compliance checks. Such an approach eliminates the need to reconfigure the variety of settings required by a new DB VM, a great convenience for your developers.
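Both examples follow the same pattern: snapshot one vDisk, then attach a clone of it either back to the original VM or to a second VM, while the source VM keeps running. The sketch below models that flow; the `VM` class and helper functions are hypothetical illustrations, not Datrium’s API.

```python
from dataclasses import dataclass, field


@dataclass
class VM:
    name: str
    disks: dict = field(default_factory=dict)  # vDisk name -> blocks
    powered_on: bool = True


def snap_disk(vm, disk):
    """Point-in-time copy of a single vDisk's block references."""
    return (disk, tuple(vm.disks[disk]))


def restore_disk(vm, snap):
    """Attach a clone of the snapped vDisk back to the running VM.

    The VM stays powered on and its other vDisks are untouched --
    contrast with a whole-VM restore (power-off) or a whole-VM clone
    (new IPs, licenses, reconfiguration).
    """
    disk, blocks = snap
    vm.disks[disk + "-restored"] = list(blocks)


def clone_disk_to(snap, vm_dst):
    """Attach a clone of a snapped data vDisk to another VM for dev/test."""
    disk, blocks = snap
    vm_dst.disks[disk] = list(blocks)


prod = VM("db-prod", {"os.vmdk": ["o0"], "data.vmdk": ["d0", "d1"]})
snap = snap_disk(prod, "data.vmdk")       # snapshot just the data vDisk
prod.disks["data.vmdk"] = ["d0", "oops"]  # in-guest data later gets damaged

restore_disk(prod, snap)                  # intra-VM recovery, no downtime
dev = VM("db-dev", {"os.vmdk": ["o0"]})
clone_disk_to(snap, dev)                  # dev/test copy, no DB reconfig
```

In the restore case the clone is attached alongside the damaged disk so files can be copied back inside the guest; in the clone case only the data vDisk travels, so the dev VM keeps its own identity and configuration.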
In virtualized private cloud environments, it is common to have thousands of virtual machines that need to be snapshotted, restored or cloned. Traditional LUN approaches make targeted protection of a subset of VMs very complex to set up and manage. Customers must create static VM-to-LUN mappings to avoid impacting multiple VMs when a LUN-wide snapshot occurs, and every time new VMs are added to the mix, this exercise has to be repeated. This simply doesn’t scale the way a cloud-scale system should. While some systems provide VM-level targeting, policy application is limited to individual VMs, an entire datastore or static pre-defined groups that do not evolve as new VMs are added to the environment.
Datrium comes to the rescue yet again with a dynamic policy construct called a “Protection Group (PG)”, which allows customers to group any VMs or files across the entire DVX system with simple wildcard inputs. Once grouped, policy definitions such as snapshot frequency, retention and Datrium elastic replication can be applied, and the matching objects are automatically snapped and replicated. A new VM matching the wildcard criteria is automatically bound to the existing policy settings, with no manual intervention required. Thereafter, snapshots can be quickly discovered through a search-based mechanism in the Snapstore catalog, and restore/clone operations can be performed on them at any granularity (Protection Group, VM or file) at either the source or the target site. Replication is also highly efficient: it transfers data at the defined granularity from host flash and leverages resources across all hosts in the DVX cluster, eliminating central bottlenecks.
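The dynamic part of a wildcard-based policy is that group membership is re-evaluated against the live inventory rather than fixed at creation time, so a newly deployed VM whose name matches the pattern is covered immediately. The sketch below illustrates that idea with Python’s standard `fnmatch` wildcard matching; the class and field names are illustrative, not Datrium’s real Protection Group schema.

```python
import fnmatch


class ProtectionGroup:
    """Dynamic policy: objects whose names match a wildcard pattern
    inherit the group's snapshot, retention and replication settings.
    (Illustrative model only, not Datrium's configuration schema.)
    """

    def __init__(self, pattern, snap_every_min, retain_days, replicate):
        self.pattern = pattern
        self.snap_every_min = snap_every_min
        self.retain_days = retain_days
        self.replicate = replicate

    def members(self, inventory):
        """Re-evaluated on demand, so new VMs bind automatically."""
        return [name for name in inventory
                if fnmatch.fnmatch(name, self.pattern)]


pg = ProtectionGroup("sql-*", snap_every_min=15, retain_days=30,
                     replicate=True)
inventory = ["sql-prod-01", "sql-prod-02", "web-01"]
covered = pg.members(inventory)            # only the sql-* VMs

inventory.append("sql-prod-03")            # a new VM is deployed...
covered_now = pg.members(inventory)        # ...and is covered automatically
```

Contrast this with a static group: there, `sql-prod-03` would sit unprotected until an administrator edited the group by hand.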
You Can Now Converge Cloud Data Management
Datrium Data Cloud has gone beyond VM-granular systems with the addition of datastore-file level granularity, dynamic policies, and a server-powered elastic replication solution. It is all built on a high performance, efficient and resilient DVX core, which frees virtualization administrators to use powerful data management capabilities without crippling trade-offs.
With Datrium Data Cloud, an integrated solution for primary storage and cloud-scale data management is now a reality.