☁️ Cloudflare Blog · February 3, 2026

Designing for Global Object Storage Writes with Local Uploads

This article introduces Cloudflare R2 Local Uploads, a feature designed to enhance global object storage write performance by allowing clients to upload data to the nearest Cloudflare edge location. The data is then asynchronously replicated to the bucket's primary region, ensuring immediate accessibility and strong consistency. This architecture addresses the latency challenges of cross-regional data transfers for write-heavy, globally distributed applications.


Cloudflare R2's Local Uploads feature fundamentally changes how global object storage writes are handled, moving from a direct-to-bucket region model to an edge-first, asynchronous replication model. This significantly reduces Time to Last Byte (TTLB) for cross-regional uploads, making data ingress faster and more reliable for globally distributed users and applications.

The Challenge of Global Uploads

Traditional object storage often requires data to travel the full distance to the bucket's home region, leading to increased latency and variability for clients in different geographical locations. This bottleneck becomes more pronounced for applications with a global user base or devices widely distributed for data collection, where upload performance is critical.

R2 Local Uploads Architecture

When Local Uploads are enabled, object data is initially written to the Cloudflare storage infrastructure closest to the client, and the object becomes immediately accessible as soon as this local write completes. An asynchronous replication process then copies the data to the bucket's designated primary region, maintaining strong consistency throughout. Cloudflare's global network thus handles both reads (via caching) and, with Local Uploads, writes.
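The edge-first write path described above can be sketched as follows. This is a minimal illustration, not the R2 API: every name here (`handleLocalUpload`, the in-memory `Store`, the queue shape) is hypothetical, and a toy checksum stands in for a real content hash.

```typescript
// Hypothetical sketch of an edge-first upload handler. The client's PUT
// lands at the nearest edge; data is persisted locally, the object is
// immediately readable, and a replication task toward the bucket's
// primary region is enqueued asynchronously.

interface PutResult { etag: string; storedAt: string }

type Store = Map<string, Uint8Array>;

function simpleEtag(body: Uint8Array): string {
  // Toy checksum standing in for a real content hash.
  let h = 0;
  for (const b of body) h = (h * 31 + b) >>> 0;
  return h.toString(16);
}

function handleLocalUpload(
  key: string,
  body: Uint8Array,
  localRegion: string,
  localStore: Store,
  replicationQueue: Array<{ key: string; from: string }>,
): PutResult {
  // 1. Durable write to the storage infrastructure closest to the client.
  localStore.set(key, body);

  // 2. The object is immediately accessible: reads are served from the
  //    local copy until replication to the primary region completes.

  // 3. Enqueue asynchronous replication to the bucket's primary region.
  replicationQueue.push({ key, from: localRegion });

  return { etag: simpleEtag(body), storedAt: localRegion };
}
```

The key property is that the client's upload latency depends only on step 1; steps 2 and 3 never block the response.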

Asynchronous Replication Mechanism

The asynchronous replication is managed via Cloudflare Queues. When metadata is published for an object with Local Uploads, three operations occur atomically: storage of the object metadata, creation of a pending replica key (detailing the replication plan), and creation of a replication task marker. A background process scans these markers and dispatches tasks to regional queues. A centralized polling service then consumes tasks from these queues, batches them, and dispatches them to Gateway Workers for execution. The Workers read data from the local source, write it to the destination, and update metadata, providing at-least-once delivery and allowing the replication pace to be adjusted dynamically based on system health.
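The three-stage pipeline above can be sketched in miniature: an atomic metadata publish, a background scan that dispatches task markers to a queue, and a worker that executes the copy idempotently so at-least-once re-delivery is harmless. All names and data shapes here are illustrative assumptions, not Cloudflare's internals.

```typescript
// Hypothetical sketch of the replication pipeline: publish writes three
// records together, a scanner dispatches pending tasks, and a worker
// copies data with at-least-once semantics.

interface MetadataStore {
  objects: Map<string, { etag: string }>;
  pendingReplicas: Map<string, { source: string; destination: string }>;
  taskMarkers: Set<string>;
}

// Stage 1: the atomic publish — all three writes happen together.
function publishWithReplication(
  db: MetadataStore, key: string, etag: string,
  source: string, destination: string,
): void {
  db.objects.set(key, { etag });
  db.pendingReplicas.set(key, { source, destination });
  db.taskMarkers.add(key);
}

// Stage 2: a background scan moves task markers onto a regional queue.
function scanAndDispatch(db: MetadataStore, queue: string[]): void {
  for (const key of db.taskMarkers) queue.push(key);
  db.taskMarkers.clear();
}

// Stage 3: a worker copies the data and clears the pending-replica
// record. Re-delivery of the same key is a no-op, so at-least-once
// delivery from the queue is safe.
function replicate(
  db: MetadataStore, key: string,
  read: (region: string, key: string) => Uint8Array | undefined,
  write: (region: string, key: string, body: Uint8Array) => void,
): void {
  const plan = db.pendingReplicas.get(key);
  if (!plan) return; // already replicated: tolerate duplicate delivery
  const body = read(plan.source, key);
  if (body === undefined) throw new Error(`missing source object ${key}`);
  write(plan.destination, key, body);
  db.pendingReplicas.delete(key);
}
```

The pending-replica record doubles as the idempotency guard: once it is deleted, a re-delivered task finds nothing to do.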

💡

System Design Implication: Decoupling and Eventual Consistency

The Local Uploads design elegantly uses asynchronous processing (Cloudflare Queues) to decouple the client-facing upload operation from the cross-regional data replication. While the object is immediately accessible, the underlying data movement operates on an eventually consistent model for the primary bucket location. This trade-off prioritizes immediate write performance and availability for the client.

Key Components Overview

  • R2 Gateway Worker: Entry point for API requests, handles authentication and routing.
  • Durable Object Metadata Service: Distributed layer for storing and managing object metadata, ensuring strong consistency.
  • Distributed Storage Infrastructure: Provides persistent storage for encrypted object data.
  • Cloudflare Queues: Used for asynchronous processing and rate control of replication tasks, providing built-in failure handling (retries, dead letter queues).
  • Polling Service: Centralized consumer for regional queues, dispatches replication jobs to Gateway Workers.
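The built-in failure handling mentioned for Cloudflare Queues (retries and dead letter queues) can be sketched with a simple consumer loop. This is a generic illustration of the pattern, not the Queues API: `consumeWithRetries`, the `Task` shape, and the retry limit are all assumed for the example.

```typescript
// Hypothetical sketch of queue failure handling: a task that keeps
// failing is retried a bounded number of times, then parked on a dead
// letter queue for later inspection.

interface Task { key: string; attempts: number }

function consumeWithRetries(
  queue: Task[],
  deadLetter: Task[],
  handler: (key: string) => void,
  maxAttempts = 3,
): void {
  while (queue.length > 0) {
    const task = queue.shift()!;
    try {
      handler(task.key);           // attempt the replication job
    } catch {
      task.attempts += 1;
      if (task.attempts >= maxAttempts) {
        deadLetter.push(task);     // give up: park for inspection
      } else {
        queue.push(task);          // requeue: at-least-once delivery
      }
    }
  }
}
```

Bounding retries keeps one poisoned task from stalling the rest of the queue, while the dead letter queue preserves it for diagnosis.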
Tags: object storage, CDN, global distribution, asynchronous replication, low latency, Cloudflare R2, distributed writes, eventual consistency
