With Fast Clone Tech You Can Manage Petabytes of Build & Test Artifacts in the Blink of an Eye

In this recent video, I explain how Jenkins traditionally copies artifacts and the limitations of that approach, both in terms of performance and capacity. I also explain that a typical day at Datrium involves provisioning hundreds of virtual slaves that run thousands of tests, and that managing such large volumes of build and test artifacts (sometimes petabytes per day) with traditional CI/CD solutions is simply impossible.

But there’s an “aha” to this story: we know that Datrium DVX can manage petabytes of data in the blink of an eye using fast clone technology. We’ve proved that cloning 115 GB of build deliverables 500 times a day and running 7,000 tests, all while generating over 1.7 petabytes of total artifacts, is no problem. Here is a short list of the benefits we now enjoy from optimizing our workflow with our own DVX technology:
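To see why full copies can't keep up with this workload, here is a hypothetical back-of-the-envelope sketch. The numbers for deliverable size and clone count come from the workload above; the 2 GB of new data written per clone is purely an illustrative assumption, not a measured Datrium figure:

```python
# Back-of-the-envelope comparison: full artifact copies vs.
# copy-on-write fast clones for 500 clones/day of a 115 GB deliverable.

GB = 10**9
deliverable_size = 115 * GB   # one set of build deliverables (from the post)
clones_per_day = 500          # clones made each day (from the post)

# Traditional full copies: every clone duplicates all the data.
full_copy_bytes = deliverable_size * clones_per_day

# Copy-on-write clones: one shared baseline plus per-clone deltas.
# The 2 GB delta per clone is an illustrative assumption.
assumed_delta_per_clone = 2 * GB
clone_bytes = deliverable_size + assumed_delta_per_clone * clones_per_day

print(f"full copies: {full_copy_bytes / 10**12:.1f} TB/day")
print(f"cow clones:  {clone_bytes / 10**12:.1f} TB/day")
```

Under these assumptions, full copies would consume 57.5 TB of physical capacity per day, while clones need only what actually changes, which is where the "lowered storage requirements" above comes from.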

  • Lowered storage requirements
  • Reduced overall runtimes
  • Preserved all artifacts
  • Improved virtual resource utilization
  • Simplified management of slaves

Fast clones provide dramatic benefits, including reduced storage requirements, improved throughput, and optimal resource management. And since each artifact clone provides the complete build and test environment, it can be used for other downstream jobs, postmortem analysis, and further ad hoc testing without dirtying the original copy.
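DVX implements this at the storage layer, so the details are internal to the product, but the copy-on-write behavior just described can be sketched in a few lines. The `FastClone` class and the block names below are illustrative only, not DVX APIs:

```python
# Toy sketch of copy-on-write clone semantics: a clone shares its
# parent's data until written, so writes never dirty the original.

class FastClone:
    """A clone that reads through to a shared base until it is written."""

    def __init__(self, base_blocks):
        self._base = base_blocks  # shared with the parent, never mutated here
        self._delta = {}          # this clone's private overrides

    def read(self, block_id):
        # Prefer the clone's own writes; fall back to the shared base.
        return self._delta.get(block_id, self._base.get(block_id))

    def write(self, block_id, data):
        # Copy-on-write: record the change privately, base stays clean.
        self._delta[block_id] = data

# One "build artifact" handed to a downstream job as a clone:
artifact = {"bin/app": b"v1", "logs/build.log": b"ok"}
clone = FastClone(artifact)
clone.write("logs/build.log", b"postmortem notes")

print(clone.read("logs/build.log"))   # the clone sees its own write
print(artifact["logs/build.log"])     # the original copy stays undirtied
```

This is why a downstream job, postmortem debug session, or ad hoc test run can scribble all over its clone while the original artifact remains pristine for everyone else.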

How cool would it be if you could optimize your workflow by using virtual disk and persistent container data volume clones for artifacts? Without blinking an eye! Read the full story of how we conquered Jenkins artifact management here.