Module 8: Real-World Case Studies II

The Problem

A user has a 2GB folder synced across three devices: a laptop, a phone, and a desktop. They edit a 100MB presentation on the laptop. The system must sync that change to the other two devices. Uploading the entire 100MB file every time a single slide changes is wasteful. If two devices edit the same file simultaneously, the system must detect and resolve the conflict without losing either edit.

Cloud storage is a deceptively complex system. The user sees a folder. Behind it is a distributed architecture handling deduplication, chunking, delta synchronization, conflict detection, and metadata management across millions of concurrent users.

High-Level Architecture

```mermaid
graph TD
    C1[Client 1<br/>Desktop] --> SA[Sync Agent]
    C2[Client 2<br/>Laptop] --> SA
    C3[Client 3<br/>Phone] --> SA
    SA --> MS[Metadata Service]
    SA --> BS[Block Storage<br/>S3 / GCS]
    SA --> NS[Notification Service<br/>Pub/Sub]
    MS --> DB[(Metadata DB<br/>File tree, versions, hashes)]
    NS --> C1
    NS --> C2
    NS --> C3
```

The architecture separates two fundamentally different concerns. The metadata service tracks the file tree: which files exist, their sizes, version numbers, and block hashes. The block storage service stores the actual file data as content-addressed chunks. This separation is the foundation that makes deduplication and delta sync possible.

Block-Level Chunking

Files are not stored as single objects. Each file is split into fixed-size blocks, typically 4MB each. Each block is hashed using SHA-256 to produce a content fingerprint. The file's metadata record stores the ordered list of block hashes that compose it.
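The chunk-and-hash step can be sketched in a few lines of Python. This is a minimal illustration, not a real product API; the function names are invented for the example.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4MB fixed-size blocks

def chunk_and_hash(data: bytes) -> list[tuple[str, bytes]]:
    """Split data into fixed-size blocks and fingerprint each with SHA-256."""
    blocks = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        blocks.append((hashlib.sha256(block).hexdigest(), block))
    return blocks

def file_manifest(data: bytes) -> list[str]:
    """A file's metadata record is just the ordered list of block hashes."""
    return [h for h, _ in chunk_and_hash(data)]
```

Note that identical blocks always produce identical hashes, which is what makes the deduplication described below possible.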

This chunking approach enables three capabilities:

Block-Level Deduplication

If two users upload the same 4MB block, only one copy is stored. The system checks: does a block with this SHA-256 hash already exist in storage? If yes, skip the upload and just reference the existing block. If no, upload the block and register its hash.

The impact at scale is enormous. Consider a company where 500 employees all have the same 50MB onboarding PDF in their synced folders. Without deduplication, that is 25GB of storage. With block-level dedup, it is 50MB. The other 499 copies are just metadata pointers to the same blocks.

Deduplication also applies within a single user's files. If you copy a folder, the blocks already exist. The system creates new metadata entries pointing to existing blocks. The copy is instant from a storage perspective.
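The dedup check can be sketched with an in-memory content-addressed store. This is illustrative only; a real system would back the store with S3 or GCS, and the class and field names here are assumptions for the example.

```python
import hashlib

class BlockStore:
    """Content-addressed block store with put-if-absent deduplication."""
    def __init__(self):
        self.blocks: dict[str, bytes] = {}  # hash -> block data
        self.uploads = 0                    # count actual stores, for demo

    def put(self, block: bytes) -> str:
        """Store a block only if its hash is unseen; return the hash."""
        h = hashlib.sha256(block).hexdigest()
        if h not in self.blocks:  # the dedup check
            self.blocks[h] = block
            self.uploads += 1
        return h

store = BlockStore()
onboarding_pdf = [b"a" * 100, b"b" * 100]  # stand-in for a file's blocks

# 500 employees "upload" the same file; only the first upload stores data.
manifests = [[store.put(b) for b in onboarding_pdf] for _ in range(500)]

# Copying a folder is just copying the list of hashes -- no new blocks.
copied_manifest = list(manifests[0])
```

After 500 uploads the store holds exactly two blocks; the other 499 manifests are metadata pointers to the same content.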

Key insight: Separating metadata from block storage is what makes deduplication work. A file is just an ordered list of block hashes. "Copying" a file means copying a list of hashes. The blocks themselves are immutable, content-addressed objects shared across all users.

Delta Sync

A user edits one slide in a 100MB presentation. The file is composed of 25 blocks (at 4MB each). Only the block containing the modified slide has changed. Delta sync identifies which blocks changed and uploads only those.

```mermaid
sequenceDiagram
    participant C as Client
    participant S as Sync Agent
    participant M as Metadata Service
    participant B as Block Storage
    C->>S: File modified: presentation.pptx
    S->>S: Re-chunk file, compute block hashes
    S->>M: Compare new hashes with stored hashes
    M-->>S: 24 blocks unchanged; block 17 is new
    S->>B: Upload only block 17 (4MB)
    B-->>S: Block stored, hash registered
    S->>M: Update file metadata: block 17 hash updated
    M-->>S: Version incremented
    S->>C: Sync complete
```

Instead of uploading 100MB, the client uploads 4MB. That is a 96% reduction in bandwidth. For users on slow connections or mobile networks, this difference is the reason the product is usable at all.

The mechanism relies on an rsync-style algorithm. The client computes a rolling checksum across the modified file to detect block boundaries that have shifted, then computes the SHA-256 hash of each block and compares it against the previously stored hashes. Only blocks with new hashes are uploaded.
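The hash-comparison step can be sketched as follows. For readability this example uses a toy 4-byte block size and fixed boundaries; a real client uses 4MB blocks plus rolling checksums to handle shifted boundaries, and the function names are invented for illustration.

```python
import hashlib

BLOCK_SIZE = 4  # toy size for the demo; real systems use ~4MB

def manifest(data: bytes) -> list[str]:
    """Ordered list of SHA-256 hashes for each fixed-size block."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def blocks_to_upload(old: list[str], new: list[str]) -> list[int]:
    """Indices of blocks whose hashes are not already stored server-side."""
    stored = set(old)
    return [i for i, h in enumerate(new) if h not in stored]

old = manifest(b"AAAABBBBCCCC")   # previously synced version: 3 blocks
new = manifest(b"AAAAXXXXCCCC")   # middle block edited
```

Here `blocks_to_upload(old, new)` flags only the middle block, so only that block transfers.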

Sync Strategy Comparison

| Strategy | What Transfers | Bandwidth Cost (100MB file, 1 line changed) | Complexity | Use Case |
|---|---|---|---|---|
| Full Sync | Entire file every time | 100 MB | Low | Simple backup, initial upload |
| File-Level Delta | Only modified files (entire file if any change) | 100 MB | Low | Basic sync tools |
| Block-Level Delta | Only modified blocks within modified files | 4 MB (one block) | Medium | Dropbox, Google Drive |
| Byte-Level Delta | Only changed bytes (binary diff) | ~1 KB | High | rsync, specialized tools |

Block-level delta is the sweet spot for cloud storage. It captures most of the savings of byte-level delta with significantly less computational overhead. Computing a binary diff of a 100MB file is expensive. Comparing 25 block hashes is cheap.

Conflict Resolution

Two users edit the same file on different devices while offline. Both devices come online and attempt to sync. The metadata service detects a conflict: both edits are based on the same parent version, but they produce different block hashes.

Three strategies handle this:

  1. Last write wins: the most recently synced edit overwrites the other. Simple, but it silently discards data.
  2. Conflict copy: keep both versions. The later sync is saved as a renamed file (e.g., "presentation (conflict copy).pptx") and the user reconciles manually.
  3. Automatic merge: combine the edits when the format allows it, such as non-overlapping changes to a structured document. Powerful, but format-specific and complex.

The conflict copy approach is the safest default. It never loses data. The cost is user inconvenience: someone must manually reconcile the two versions. For most cloud storage products, this trade-off is acceptable because conflicts are rare. Most files are edited by one person at a time.
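Version-based conflict detection with a conflict-copy fallback can be sketched like this. The `try_commit` function and the conflict-copy naming scheme are assumptions for illustration, not a real product's behavior.

```python
def try_commit(files: dict, name: str, base_version: int,
               new_manifest: list[str], device: str) -> str:
    """Accept an upload only if it is based on the current server version;
    otherwise save the stale edit as a conflict copy so no data is lost."""
    current = files[name]
    if base_version == current["version"]:
        files[name] = {"version": current["version"] + 1,
                       "manifest": new_manifest}
        return name
    # Stale base version: keep both edits by writing a conflict copy.
    conflict_name = f"{name} (conflict copy, {device})"
    files[conflict_name] = {"version": 1, "manifest": new_manifest}
    return conflict_name

files = {"deck.pptx": {"version": 3, "manifest": ["h1", "h2"]}}

# Both devices edited offline, starting from version 3.
a = try_commit(files, "deck.pptx", base_version=3,
               new_manifest=["h1", "hA"], device="Device A")
b = try_commit(files, "deck.pptx", base_version=3,
               new_manifest=["h1", "hB"], device="Device B")
```

Device A's commit lands first and bumps the version to 4; Device B's commit, still based on version 3, is diverted into a conflict copy instead of overwriting A's work.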

Notification and Propagation

When the laptop syncs a change, the desktop and phone need to know. The notification service uses a pub/sub pattern. Each device subscribes to a channel for its user's file tree. When the metadata service records a new version, it publishes a notification. Each subscribed device pulls the updated metadata and downloads any new blocks.

For devices that are offline, the notification queues until the device reconnects. On reconnection, the sync agent pulls all pending changes and applies them in version order.
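The pub/sub flow with per-device queues can be sketched as follows. This is a minimal in-memory illustration, not a real broker API; names are invented for the example.

```python
from collections import deque

class NotificationService:
    """Per-device queues so offline devices receive pending changes on reconnect."""
    def __init__(self):
        self.queues: dict[str, deque] = {}

    def subscribe(self, device: str) -> None:
        self.queues.setdefault(device, deque())

    def publish(self, change: dict) -> None:
        # Fan out to every subscriber; queued until the device drains it.
        for q in self.queues.values():
            q.append(change)

    def drain(self, device: str) -> list[dict]:
        """Called on (re)connect: pull all pending changes in version order."""
        pending = sorted(self.queues[device], key=lambda c: c["version"])
        self.queues[device].clear()
        return pending

ns = NotificationService()
ns.subscribe("desktop")
ns.subscribe("phone")  # goes offline; its queue keeps accumulating
ns.publish({"file": "deck.pptx", "version": 4})
ns.publish({"file": "deck.pptx", "version": 5})
```

An online device drains each change as it arrives; an offline device drains the whole backlog, sorted by version, when it reconnects.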

Assignment

A user has a 100MB file synced via cloud storage. They change one line in the file. Design the sync flow.

  1. Without delta sync, how much data transfers? With block-level delta sync (4MB blocks), how much? With byte-level delta, approximately how much?
  2. The file is split into 25 blocks. Describe how the client determines which blocks changed. What hash function is used, and why?
  3. Two devices edit the same file while offline. Device A changes block 3. Device B changes block 17. Can the system merge these changes automatically? What if both devices changed block 3?
  4. A company has 1,000 employees. Each has a copy of the same 200MB training video. How much total storage does the system use with and without deduplication?