Simplicity and Flexibility for Private Cloud Infrastructure—Good but is it Good Enough?

Posted by: Devin Hamilton

 

There is much to be said for simplicity in technology, and it has never been more true than for private cloud infrastructure today.

Some of the best solutions of our time have come at a huge price: countless hours toiling over complex systems and methods to make something look or feel simple.

Meanwhile, the world is filled with ‘cheap and easy’ as marketing concepts and nothing more; the actual work required to make those claims true is massive. Rarely do you find true simplicity, let alone elegance and flexibility, and it is rarer still to find all of these attributes at a great price point.

 

Simplicity and flexibility

To a broad community of users or customers, simplicity is a driving force behind purchasing decisions. It makes sense considering how many fewer people there are to run today’s data centers and how many more things they have to manage. 

Because the enterprise has an ongoing need for fluid change, simplicity shares a strong bond with flexibility. On a functional level the two concepts have become artificially synonymous, and both are now considered table stakes for any given technology.

Yet a simple system is not often “flexible” too. Typically you get one or the other: a simple thing has a small set of jobs, and they are fixed and rigid. Flexibility, on the other hand, often demands reconfiguration, downtime, and manipulation of key elements toward a different set of functions. Not so simple.

 

IT challenges have changed

In the realm of data conveyance and storage technologies, I’ve watched the science of usefulness applied to a growing but common pool of IT challenges:

  • Performance
  • Reliability
  • Capacity
  • Supportability
  • Cost
  • Future-proofing

Most recently in solution development, what is ultimately deemed “efficiency” is spoken about far more frequently. Often this goal translates to deriving functional improvement from existing assets, perhaps in concert with new solutions on the horizon.

 

Datrium answers the “simplicity” call

Datrium DVX is built on the principle that you can get more out of what you already have and, in doing so, reduce cost and drive better performance than was previously thought possible.

Datrium can leverage legacy or new x86 servers, whether neatly racked or sitting on DevOps shelves. Blade-based compute, highly integrated branded servers, and “good-enough” technology from the commodity standby vendors are all fair game. If you are buying new assets, DVX also future-proofs your infrastructure by extending their projected life and enhancing performance density. The point is, our DVX software needs only three things (see the sketch after this list):

  1. Compute – You’ve got that
  2. Flash – SSD / NVMe – You choose
  3. VMware ESXi with vCenter – You’ve got that too
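
To make those three requirements concrete, here is a minimal pre-flight sketch in Python. It is illustrative only, not a Datrium tool: it uses only the standard library, assumes a Linux host, and the core-count threshold and vCenter address are placeholder assumptions of mine.

    import os
    import socket
    from pathlib import Path

    def has_compute(min_cores: int = 8) -> bool:
        """1. Compute: does this host have a reasonable number of CPU cores?"""
        return (os.cpu_count() or 0) >= min_cores

    def has_flash() -> bool:
        """2. Flash: is any non-rotational (SSD/NVMe) block device present?
        On Linux, /sys/block/<dev>/queue/rotational reads '0' for flash media."""
        return any(
            p.read_text().strip() == "0"
            for p in Path("/sys/block").glob("*/queue/rotational")
        )

    def can_reach_vcenter(host: str, port: int = 443) -> bool:
        """3. vSphere: can we open a TCP connection to the vCenter endpoint?"""
        try:
            with socket.create_connection((host, port), timeout=3.0):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        # "vcenter.example.com" is a placeholder; use your own vCenter address.
        print("compute:", has_compute())
        print("flash:  ", has_flash())
        print("vcenter:", can_reach_vcenter("vcenter.example.com"))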

 

Finding efficiency—the love child of simplicity and flexibility

Here’s what I’m saying: when simplicity and flexibility are successfully integrated in a single architecture, efficiency is the result.

It’s true that this kind of achievement is tough to pull off. So even as critical demand for new and more capable solutions increases, fewer truly efficient ‘one-off’ technologies emerge within a given span of time (see all-flash arrays). We didn’t like that trend; there’s no grit there. So we did something completely different: Datrium built a complete end-to-end solution. It is protective, it scales, and it is very efficient.

Now you may architect for the needs of today and grow toward workloads of tomorrow with simple incremental choices of server and flash. 

 

Making choices

Emerging technology behaves like survival of the fittest in the wild. But what do we call these new solutions? How should they be calibrated into our daily understanding of function and form within the datacenter? How do we pick the best of breed out of the noise?

I suggest starting with these simple choice filters, sketched as a quick checklist below:

  • Is the solution built for efficiency rather than raw horsepower?
  • Does it facilitate a focus on private cloud architecture?
  • Does it lower costs while making use of existing infrastructure assets over their entire financial lifespan?
  • Does it preserve your choice of Open Converged components: servers, flash, and network?
  • Does it do no harm, running alongside legacy assets until they are fully amortized?
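
For illustration only, those filters can be written down as a tiny checklist in Python; the field names are my own shorthand for the bullets above, not a formal rubric.

    from dataclasses import dataclass, fields

    @dataclass
    class SolutionFilters:
        """One boolean per choice filter from the list above."""
        efficiency_over_horsepower: bool
        private_cloud_focus: bool
        lowers_cost_on_existing_assets: bool
        open_converged_choice: bool  # free choice of servers, flash, network
        no_harm_beside_legacy: bool

        def passes(self) -> bool:
            # A candidate clears the bar only if every filter holds.
            return all(getattr(self, f.name) for f in fields(self))

    # Example: a hypothetical candidate that fails the open-choice filter.
    print(SolutionFilters(True, True, True, False, True).passes())  # False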

 

It’s 2017 and Open Convergence is a thing now.
