Log-Structured File System
  • Inline dedupe, compression, and erasure coding are always on
  • Converts random writes into large sequential containers for maximum write throughput (see the sketch after this list)
  • Distributes chunks across disks for a PB-scale object store
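A minimal sketch of the write-coalescing idea, not Datrium's actual implementation: incoming random blocks are compressed inline and buffered, then flushed to the backing device as one large sequential container. The CONTAINER_SIZE value and the device interface are assumptions for illustration.

```python
import zlib

CONTAINER_SIZE = 8 * 1024 * 1024  # hypothetical 8 MiB sequential container

class LogStructuredWriter:
    """Coalesces random writes into large sequential containers (illustrative)."""

    def __init__(self, device):
        self.device = device          # append-only backing file or object
        self.buffer = []
        self.buffered_bytes = 0

    def write(self, block: bytes):
        # Inline compression before the block enters the log.
        compressed = zlib.compress(block)
        self.buffer.append(compressed)
        self.buffered_bytes += len(compressed)
        if self.buffered_bytes >= CONTAINER_SIZE:
            self.flush()

    def flush(self):
        # One large sequential append instead of many small random writes.
        container = b"".join(self.buffer)
        self.device.write(container)
        self.buffer.clear()
        self.buffered_bytes = 0
```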
Global Deduplication
  • Every chunk has a unique cryptographic hash, so only the necessary blocks are moved (see the sketch after this list)
  • Scalable content-addressable store
  • Blockchain-like verification for data integrity
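The following toy content-addressable store, assuming SHA-256 as the chunk hash, shows how keying chunks by their digest gives global dedupe for free and allows integrity to be re-verified on every read:

```python
import hashlib

class ContentAddressableStore:
    """Chunks are keyed by their SHA-256 digest, so duplicates are stored once."""

    def __init__(self):
        self.chunks = {}   # digest -> chunk bytes

    def put(self, chunk: bytes) -> str:
        digest = hashlib.sha256(chunk).hexdigest()
        # Global dedupe: only store (or transfer) a chunk not seen before.
        if digest not in self.chunks:
            self.chunks[digest] = chunk
        return digest

    def get(self, digest: str) -> bytes:
        chunk = self.chunks[digest]
        # Blockchain-like verification: re-hash on read and compare to the key.
        if hashlib.sha256(chunk).hexdigest() != digest:
            raise ValueError("integrity check failed")
        return chunk
```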
Scalable Backup Catalog
  • Scalable, searchable snapshot catalog for clone, revert, and replicate operations (see the sketch after this list)
  • Granular to the VM, vDisk, file, and container level
  • Policy management with automatic discovery
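A simplified catalog sketch, with hypothetical field names, showing how snapshot records at VM, vDisk, file, or container granularity can be indexed and searched to drive clone, revert, and replicate workflows:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SnapshotEntry:
    """One catalog record; granularity can be a VM, vDisk, file, or container."""
    object_id: str        # e.g. "vm-042" or "vm-042/vdisk-1" (hypothetical IDs)
    timestamp: datetime
    root_digest: str      # content hash of the snapshot's metadata tree
    policy: str           # retention/replication policy name

class SnapshotCatalog:
    def __init__(self):
        self.entries: list[SnapshotEntry] = []

    def add(self, entry: SnapshotEntry):
        self.entries.append(entry)

    def search(self, object_id: str):
        # Searchable by object; results feed clone, revert, or replicate actions.
        return [e for e in self.entries if e.object_id.startswith(object_id)]
```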

Datrium Converging Primary and Backup with Sazzala Reddy

These architectural foundations deliver breakthrough results: high-performance random reads and writes on commodity devices, high data integrity because data is never overwritten in place, a global dedupe index for maximum efficiency, redirect-on-write snapshots with no performance penalty, and a design built for public cloud services such as EC2 and S3.
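A rough sketch of why redirect-on-write snapshots carry no performance penalty, assuming a content-addressable chunk store like the earlier sketch: new writes always land in new chunks and simply redirect the block map, so a snapshot is just a frozen copy of that small map rather than a copy of the data.

```python
class RedirectOnWriteVolume:
    """New writes land in new chunks; snapshots pin the old block map unchanged."""

    def __init__(self, store):
        self.store = store            # e.g. the ContentAddressableStore sketch above
        self.block_map = {}           # logical block address -> chunk digest
        self.snapshots = {}           # snapshot name -> frozen copy of block_map

    def write(self, lba: int, data: bytes):
        # Never overwrite: new data becomes a new chunk and the map is redirected.
        self.block_map[lba] = self.store.put(data)

    def snapshot(self, name: str):
        # A snapshot only copies the block map, not the data, so it is cheap
        # and imposes no copy-on-write work on subsequent writes.
        self.snapshots[name] = dict(self.block_map)
```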