Converged Infrastructure Without the Pain? Yes, We Can!

Posted by: Ganesh Venkitachalam

Traditional storage arrays are poorly adapted to virtualization, unable to fully exploit the latency benefits of flash storage, and hard to scale. Hyperconverged infrastructure (HCI) is a better fit for virtualization but throws away many of the benefits of storage arrays (like ease of maintenance and data durability) and still does not solve the scaling problem. But there's a fresh approach: a rackscale solution that combines the best characteristics of both storage arrays and HCI. The dream solution would have these attributes:

  • Provide high durability, high availability, always-on encryption, and always-on compression and deduplication, at a far lower cost than other solutions.
  • Leverage non-volatile memory to deliver low latency and high bandwidth regardless of access pattern.
  • Scale incrementally, with linear performance increase and no hot spots.
  • Grow capacity in inexpensive increments.
  • Offer data services (snapshots, data mobility) and management at VM/application granularity.
  • Keep configuration and management streamlined and simple.


Something new and ‘OMG’ simple

There are existing converged infrastructure systems that satisfy some of the attributes on this list, but none that satisfies all of them. Until now! Have you heard of DVX Rackscale, by Datrium? Read on to learn more.


Object stores

Object stores like AWS S3 are extremely durable and scale to exabytes. They have excellent bandwidth for sequential access (but not low latency). They are good at snapshots. They are cheap (built on spinning disk) and thus ideal for storing durable copies of data, including snapshots. Object stores achieve some of these properties by offering strong consistency only when an object is written once and never modified. In short, object stores are a well-known technology that solves several of the requirements above.

The DVX data node is essentially that: a highly scalable, highly available, disk-based, write-once object store. The objects stored in a data node are large (think megabytes), always erasure-coded, compressed and deduplicated. The objects are written once, with strong consistency, and never over-written. The write-once behavior enables simple and cost-effective data protection, high durability, and efficient snapshots. All durable data in a DVX system is stored in objects on data nodes supplied by Datrium. When a server fails or has to be put in maintenance mode, there is zero impact on data durability or availability – unlike HCI.
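To make the write-once contract concrete, here is a minimal Python sketch of a put-once object interface. The names are illustrative, not Datrium's actual API; the point is that an object is immutable after its first successful write, which is what makes strong consistency, simple data protection, and cheap snapshots possible.

    class WriteOnceObjectStore:
        """Toy in-memory model of a put-once object store.

        Objects are immutable after the first successful write, so a
        reader either sees a complete object or nothing, never a
        partial update. Names are illustrative, not Datrium's API.
        """

        def __init__(self):
            self._objects = {}  # object_id -> bytes

        def put(self, object_id: str, data: bytes) -> None:
            # Refuse overwrites: immutability is the whole contract.
            if object_id in self._objects:
                raise ValueError(f"object {object_id!r} already exists")
            self._objects[object_id] = data

        def get(self, object_id: str) -> bytes:
            return self._objects[object_id]

    # A snapshot can then be just a list of object IDs: because objects
    # never change, referencing them is enough; no data is copied.
    store = WriteOnceObjectStore()
    store.put("vm42/chunk-0001", b"megabytes of erasure-coded data...")
    snapshot = ["vm42/chunk-0001"]  # cheap, immutable reference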

However, object stores are not good at low latency, high IOPS, or random access. Enter non-volatile memory and high-performance servers.


Non-volatile memory and high-performance servers

Non-Volatile Memory (NVM) technologies are excellent at low latency (<100 µs and rapidly dropping), high bandwidth, and random access. The first widely adopted instance of NVM was the SSD. The second is going to be 3D XPoint (<10 µs latency). Because of Little's Law, such storage is best placed in the server; otherwise the networking latency will far exceed the actual device access latency. At high queue depths, the networking latency can be many milliseconds. You can lay out a whole new network with very high bandwidth and very low latency that supports NVMe over Fabrics, which is what array vendors want you to do. But that's neither cheap nor simple nor incremental. It's far cheaper and easier to use existing or new servers and put fast NVM devices in them, NVMe-attached (or SATA/SAS-attached in a pinch).
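To see why Little's Law (outstanding requests = throughput × latency, L = λW) pushes fast media into the server, consider a rough back-of-the-envelope calculation in Python; the numbers are illustrative, not measurements:

    # Little's Law: outstanding requests (L) = throughput (lambda) * latency (W),
    # so the latency a host observes is W = L / lambda, no matter how
    # fast the device itself is. All numbers below are illustrative.

    device_latency_s = 100e-6   # ~100 us native NVM access

    # A shared, network-attached array sustaining 500K IOPS with 1,000
    # requests outstanding across all hosts:
    array_iops = 500_000
    queue_depth = 1_000
    observed_latency_s = queue_depth / array_iops

    print(f"native device latency: {device_latency_s * 1e6:.0f} us")
    print(f"observed latency at queue depth {queue_depth}: "
          f"{observed_latency_s * 1e3:.1f} ms")
    # Prints 2.0 ms: 20x the device latency, dominated by queueing,
    # which is why the NVM belongs in the server itself.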

Installed on your servers, the DVX software works with the data nodes to leverage NVM and server compute cycles. Data in server NVM can be accessed with minimal latency because it's right there on the server and indexed for low-latency random I/O. Look ma, no new networking gear! The data stored on NVM is compressed and deduplicated, always. This lets us cheaply cache all the data in use on a server on that server's NVM device. Building a low-latency NVM filesystem that supports always-on compression and deduplication in software alone is not easy; some of our competitors had to resort to custom hardware to achieve some of these effects.
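The post doesn't spell out the mechanism, but a standard way to build an always-compressed, always-deduplicated cache is content addressing: fingerprint each block, compress it, and index it by fingerprint so identical blocks are stored only once. A toy Python sketch of the idea (my illustration, not the DVX implementation):

    import hashlib
    import zlib

    class DedupCompressedCache:
        """Toy content-addressed cache: every block is compressed, and
        identical blocks are stored once. A real NVM filesystem also
        needs persistence, eviction, and crash consistency, all omitted
        here; names are illustrative.
        """

        def __init__(self):
            self._by_fingerprint = {}   # fingerprint -> compressed bytes
            self._index = {}            # (vm, offset) -> fingerprint

        def write(self, vm: str, offset: int, block: bytes) -> None:
            fp = hashlib.sha256(block).hexdigest()
            if fp not in self._by_fingerprint:    # dedup: store once
                self._by_fingerprint[fp] = zlib.compress(block)
            self._index[(vm, offset)] = fp

        def read(self, vm: str, offset: int) -> bytes:
            fp = self._index[(vm, offset)]
            return zlib.decompress(self._by_fingerprint[fp])

    cache = DedupCompressedCache()
    cache.write("vm1", 0, b"A" * 4096)
    cache.write("vm2", 0, b"A" * 4096)   # identical block: no new storage
    assert cache.read("vm2", 0) == b"A" * 4096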


Scaling 

Both traditional arrays and HCI are complex and expensive to scale. For traditional arrays, scaling is limited by storage controller CPU, expensive, and not incremental. For HCI, scaling is problematic because of inter-node communication (especially between nodes with differing compute capacity) and the need to scale compute and storage together.

In the DVX system, all compute-intensive operations (deduplication, compression, replication, erasure coding) are performed on the compute nodes by the DVX software, as sketched below. Each server you add to the cluster contributes compute power to storage processing. In normal operation there is zero communication between servers: reads come from server-local NVM devices, and writes persist data to the data node directly from the servers.
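To picture where the CPU work lands, here is a schematic Python sketch of such a host-side write path: the server fingerprints and compresses the data, computes redundancy, and ships the result straight to a data node. Simple XOR parity stands in for real erasure coding (a production system would use something like Reed-Solomon), and all names here are hypothetical:

    import hashlib
    import zlib

    class DataNodeStub:
        """Stand-in for a DVX data node: accepts write-once objects."""
        def __init__(self):
            self.objects = {}

        def put(self, object_id, stripes):
            self.objects.setdefault(object_id, stripes)

    def xor_parity(stripes):
        """Single XOR parity stripe; a toy stand-in for real erasure
        coding, which would use something like Reed-Solomon."""
        parity = bytes(len(stripes[0]))
        for s in stripes:
            parity = bytes(a ^ b for a, b in zip(parity, s))
        return parity

    def host_write(block: bytes, node: DataNodeStub, k: int = 4) -> None:
        # Every CPU-heavy step runs on the server, not a storage controller:
        fingerprint = hashlib.sha256(block).hexdigest()  # dedup fingerprint
        payload = zlib.compress(block)                   # compression
        payload += b"\0" * (-len(payload) % k)           # pad to k stripes
        size = len(payload) // k
        stripes = [payload[i * size:(i + 1) * size] for i in range(k)]
        stripes.append(xor_parity(stripes))              # redundancy
        node.put(fingerprint, stripes)  # direct write; no server-to-server traffic

    host_write(b"example VM data" * 1000, DataNodeStub())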

This design achieves the ultimate scaling goal. If you’d like to add more compute, add a Datrium compute node (or add servers you purchase from your favorite vendor) to the DVX cluster. If you’d like to add more/faster NVM devices to servers, do so at will. If you’d like to add capacity, just add more data nodes. Because there’s no communication between servers in normal operation, the system scales linearly. 


Simplicity is the ultimate sophistication

The remaining big piece is orchestration and monitoring. Unlike most arrays, the DVX software lets you manage everything at VM granularity, with a plugin right there in your vCenter GUI. You have everything from latencies to snapshot schedules to historical IOPS stats right at your fingertips, all at per-VM granularity. All monitoring is end-to-end: if there's a network glitch between your servers and storage, the DVX software notices and alerts you. Even in the presence of network failures, Adaptive Pathing keeps data flowing with no added configuration. Your data is secure in flight and at rest, with zero impact on cost or performance, thanks to Blanket Encryption (unlike any other solution out there). Your data is always durable and always available. You get the best possible performance on the most modern hardware technology. Scaling capacity or performance is a piece of cake. All services are always on, and there are no knobs.

It’s extremely difficult to get to simple, but we work hard to ensure the largest deployments are as easy to manage as the smallest.

Rackscale private clouds for pain-free Converged Infrastructure? Yes, we can!
