Chickens (and all birds) are thought to have evolved from dinosaur relatives. One team of researchers plans to use chicken embryos to bring back dinosaur traits in a living creature. Their aim is to activate dormant traits by injecting modified genes, carried by a virus, into chicken embryos. Turning back the clock on chicken evolution means reversing the genes that cause claws to fuse into wings, that leave jaws and teeth dormant in favor of beak formation, that halt tail growth, and so on.
Re-activating dormant genes is great for studying the ancient past, but what does it have to do with future-facing storage requirements?
Well, the storage landscape looks drastically different than it did just a few short years ago. Flash has taken hold, monolithic storage arrays are on the decline, and virtualization is everywhere. Admins are tasked with managing tens of thousands of VMs. Infrastructure needs to scale easily in every dimension you can think of, and no one wants to spend time managing infrastructure anymore. Array vendors, whether the 30-year-old behemoths or the "modern" 8-year-old ones, all have to cope with these changes. Let's examine one aspect, data protection, to see how array vendors have coped thus far.
Yesterday’s storage solution
With traditional storage arrays, the array exports a set of LUNs. You create VMFS datastores on those LUNs, and then create VMs on the datastores. The resulting IO blender effect is largely mitigated (though not entirely solved) by SSDs in the arrays. Data protection, however, is still problematic. It is not possible to snapshot and replicate individual VMs, so the admin must snapshot and replicate whole LUNs. To restore a VM, you first need to figure out which LUN holds the VM you care about. If you have moved VMs between LUNs for capacity or performance balancing, just locating the correct LUN can be painful. After that, you need to recover the whole LUN, which is costly in both time and capacity. Only then can you recover the actual VM.
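The LUN-granular workflow above can be sketched as a toy model. This is a hypothetical illustration, not any vendor's API: the class and VM names are invented, and the point is simply that the smallest unit of capture and restore is the whole LUN, never the individual VM.

```python
from dataclasses import dataclass, field

@dataclass
class LunSnapshot:
    lun_id: str
    vms: list  # every VM on the LUN is captured together


@dataclass
class Array:
    luns: dict = field(default_factory=dict)      # lun_id -> list of VM names
    snapshots: list = field(default_factory=list)

    def snapshot_lun(self, lun_id):
        # The smallest unit the array can capture is the whole LUN.
        self.snapshots.append(LunSnapshot(lun_id, list(self.luns[lun_id])))

    def restore_vm(self, vm_name):
        # Step 1: hunt through snapshots to find which LUN holds the VM.
        for snap in reversed(self.snapshots):
            if vm_name in snap.vms:
                # Step 2: the restore unit is the entire LUN's contents,
                # even though only one VM was wanted.
                return snap.lun_id, snap.vms
        raise LookupError(f"{vm_name} not found in any LUN snapshot")


array = Array(luns={"lun-01": ["web-01", "db-01", "app-07"]})
array.snapshot_lun("lun-01")
lun, restored = array.restore_vm("db-01")
# Restoring "db-01" drags along every co-located VM on "lun-01".
```

Note that `restore_vm` has to search snapshots by membership; nothing in the model (or in a real array) indexes by VM, which is exactly why locating the right LUN is painful after VMs have been moved around.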
To avoid the pain of this workflow, admins have relied on external backup systems, a separate wad of technology that is poorly integrated with array features. Beyond VM-granular management, backup systems also offer the ability to back up and recover fine-grained objects like VMDKs or ISO files. In general, these backup systems provide data management features that are more aligned with your business applications: creation and management of policies, applying policies to VMs, and scheduling snapshots and replication. Most importantly, backup systems come with a searchable backup catalog to handle the scale of managed objects. None of this functionality is possible with arrays and LUNs, but the admin is now left to manage both LUNs and a separate backup system.
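What a searchable catalog buys you can be shown in a few lines. This is a minimal conceptual sketch, assuming nothing about any real backup product: the object names and snapshot IDs are invented. The idea is simply that recovery starts with a lookup by name rather than a hunt through LUNs.

```python
from collections import defaultdict

class BackupCatalog:
    """Index from object name (VM, VMDK, ISO) to its snapshots."""

    def __init__(self):
        self._index = defaultdict(list)  # object name -> snapshot ids

    def record(self, obj_name, snapshot_id):
        # Called each time a backup of the object is taken.
        self._index[obj_name].append(snapshot_id)

    def search(self, obj_name):
        # Recovery begins here: a direct lookup, at any scale.
        return list(self._index[obj_name])


catalog = BackupCatalog()
catalog.record("db-01.vmdk", "snap-2016-10-01")
catalog.record("db-01.vmdk", "snap-2016-10-02")
hits = catalog.search("db-01.vmdk")  # every snapshot of that one VMDK
```

The contrast with the LUN model is the data structure itself: the catalog is keyed by the object the admin actually cares about.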
Dormant storage concepts re-animated
It wasn't much of a shock when a VVOLs deep-dive presentation at VMworld 2016 included a slide stating, in huge font, "LUNs Suck". The reasons touted include siloed management, rigid infrastructure, complexity, no visibility into storage, and the wrong granularity of management objects (LUNs, not VMs). Array vendors would have you believe that the answer to the management problem above is their VVOL implementation, in which VMware provides the array vendors with information on what constitutes a VM. In theory, the arrays can now manage data at the right granularity and the problem is solved. Not so fast.
The problem is that array implementations do not solve the entire problem:
- The number of VVOLs (i.e., VMDKs) and snapshots supported by arrays is very limited relative to the numbers of VMs and VMDKs.
- Worse yet, there is no integrated, searchable snapshot catalog, making it impossible to manage VM snapshots at scale.
- There is no dynamic binding of new VMs to existing snapshot and replication policies, so you must apply policies manually.
- There is no way to snapshot or restore files, such as OVAs or ISO images, independently of VMs.
In short, the vendors took the exact same LUN mindset from the last 30 years, applied it to VVOLs, and checked a box. The net of it is that VVOL implementations do not really solve the data protection problem. The admin is still left with the problems of managing LUNs and of dealing with a separate data protection system.
None of this is necessary. What admins deserve is a consumer-grade UI to protect and manage tens of thousands of VMs, a capability that will not sprout from re-animating dormant genes.
Continuing evolution vs re-animating dormant genes
Unlike the Chickenosaurus-inspired storage solutions, Datrium's DVX system is based on clean-sheet thinking. There are no SANs or LUNs to manage, with all the inherent limitations they bring. Files, VMDKs, VMs, and even containers are first-class management objects. Policies (such as snapshot schedule, replication frequency, and geographic location) can be set on these objects to match your application needs. Dynamic policy binding extends snapshot and replication policies to new VMs automatically. Recovery of objects is at VM or VMDK granularity, not LUN granularity. And at the heart of it all is an integrated, searchable snapshot catalog.
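Dynamic policy binding is worth a small illustration. The sketch below is hypothetical and not Datrium's actual implementation: the class, patterns, and VM names are invented. It shows the general idea that a policy is defined once against a rule, and any VM created later that matches the rule inherits the policy with no manual step.

```python
import fnmatch

class ProtectionGroup:
    """A policy bound to a name pattern rather than to specific VMs."""

    def __init__(self, pattern, snapshot_schedule):
        self.pattern = pattern
        self.snapshot_schedule = snapshot_schedule

    def applies_to(self, vm_name):
        # Glob-style match, e.g. "prod-*" covers any production VM.
        return fnmatch.fnmatch(vm_name, self.pattern)


policies = [
    ProtectionGroup("prod-*", "hourly"),
    ProtectionGroup("dev-*", "daily"),
]

def bound_policies(vm_name):
    # Evaluated when a VM appears: no admin action required.
    return [p.snapshot_schedule for p in policies if p.applies_to(vm_name)]

bound_policies("prod-db-01")  # a brand-new VM is already protected hourly
```

The design point is that the binding is a rule evaluated at VM creation time, which is precisely what the VVOL implementations criticized above lack.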
Combined with elastic replication, you no longer need to purchase a separate data protection/backup solution. And of course, there is a whole litany of other benefits to the architecture: no network latency for reads served from low-latency local flash devices; the ability to adopt newer hardware technologies like 3D XPoint as they arrive; adaptive pathing that uses multiple network links for availability and performance with zero configuration; and on and on.
A Chickenosaurus would be an interesting pet, but it is not what you need in your datacenter. Datrium is proving that modern infrastructure comes from evolved thinking, not from resurrecting genes of the past that should remain dormant. Datrium customers worry about applications and VMs, not infrastructure!