
Sequential Convoy Pattern

Process related messages in order: session-based routing, partition keys, ordered queues, and maintaining sequence without sacrificing throughput.

10 min read

The Ordering Problem

Message queues with multiple consumers process messages in parallel, which dramatically improves throughput. But parallelism destroys ordering: consumer 1 might finish processing message 3 before consumer 2 finishes message 1. For many use cases (analytics, notifications, cache updates) this is fine. For others it is catastrophic.

Consider a financial account: if you process a deposit of $100 out of order before a withdrawal of $200 (which should have failed), you end up with an invalid ledger state. Or consider an e-commerce order lifecycle: `OrderPlaced → PaymentConfirmed → Shipped → Delivered` must be processed in strict sequence for each individual order — but orders for different customers are completely independent and can be parallelized freely.
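The account example is easy to demonstrate in a few lines. This sketch (illustrative, not from the lesson) applies the same two events from the example above in both orders and gets two different ledger states:

```typescript
// Illustrative sketch: the same two events produce different ledger
// states depending on processing order.
type LedgerEvent = { type: "deposit" | "withdraw"; amount: number };

// Withdrawals that exceed the current balance are rejected.
function apply(balance: number, e: LedgerEvent): number {
  if (e.type === "deposit") return balance + e.amount;
  return e.amount <= balance ? balance - e.amount : balance; // rejected
}

const events: LedgerEvent[] = [
  { type: "withdraw", amount: 200 }, // should fail: balance is only 100
  { type: "deposit", amount: 100 },
];

// Correct order: withdrawal rejected, deposit applied → 200
const inOrder = events.reduce(apply, 100);

// Out of order: the deposit first makes the invalid withdrawal succeed → 0
const outOfOrder = [...events].reverse().reduce(apply, 100);

console.log(inOrder, outOfOrder); // → 200 0
```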

ℹ️

Key Insight

You rarely need global ordering across all messages. You almost always only need ordering within a logical group — a specific user ID, order ID, account ID, or session ID. The Sequential Convoy pattern exploits this: guarantee ordering within a group while processing different groups in parallel.

How Sequential Convoy Works

The pattern works by assigning every message to a logical convoy — a group that must be processed sequentially. A partition key (e.g., `orderId`, `userId`, `accountId`) is attached to each message. The broker uses this key to route all messages with the same key to the same partition or queue, where they form a FIFO convoy and are processed by a single consumer instance.

Sequential Convoy: partition key routes same-orderId messages to the same partition, ensuring ordered processing
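The routing step can be sketched in a few lines. Kafka's default partitioner actually uses murmur2 hashing; the simple hash below is a stand-in that shows the one property that matters — determinism:

```typescript
// Illustrative routing step: hash the partition key, take it modulo the
// partition count. (Kafka's default partitioner uses murmur2; any
// deterministic hash demonstrates the idea.)
function partitionFor(key: string, numPartitions: number): number {
  let hash = 5381;
  for (const ch of key) {
    hash = ((hash * 33) ^ ch.charCodeAt(0)) >>> 0;
  }
  return hash % numPartitions;
}

// The same key always maps to the same partition, so all events for
// one order form a FIFO convoy processed by one consumer.
const p1 = partitionFor("ord-123", 12);
const p2 = partitionFor("ord-123", 12);
console.log(p1 === p2); // → true
```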

Implementation: Kafka Partitioning

Kafka implements this natively through partition keys. When you produce a message, you specify a key (e.g., `orderId`). Kafka hashes the key to determine the partition number. All messages with the same key always land in the same partition. Within a partition, messages are strictly ordered by offset. A consumer group assigns exactly one consumer per partition, so ordering is guaranteed without any additional coordination.

```typescript
// Kafka producer: use orderId as the partition key
await producer.send({
  topic: "order-events",
  messages: [
    {
      key: order.id,          // ← partition key; same orderId → same partition
      value: JSON.stringify({
        type: "OrderPlaced",
        orderId: order.id,
        customerId: order.customerId,
        timestamp: Date.now(),
      }),
    },
  ],
});

// All events for order "ord-123" will land in the same partition
// and be processed in sequence by the same consumer instance
```
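To see why one-consumer-per-partition is enough, here is a broker-free simulation (all names hypothetical): each partition is drained by a single sequential loop, while different partitions are drained concurrently — exactly the guarantee a consumer group gives you:

```typescript
// Broker-free simulation of consumer-group semantics: one sequential
// loop per partition, partitions drained concurrently.
type Msg = { key: string; seq: number };

const partitions: Msg[][] = [
  [{ key: "ord-1", seq: 1 }, { key: "ord-1", seq: 2 }, { key: "ord-1", seq: 3 }],
  [{ key: "ord-2", seq: 1 }, { key: "ord-2", seq: 2 }],
];

const processed: Msg[] = [];

async function drain(partition: Msg[]) {
  for (const msg of partition) {
    // simulate variable processing latency
    await new Promise((resolve) => setTimeout(resolve, Math.random() * 10));
    processed.push(msg);
  }
}

// Partitions progress in parallel; within each, order is preserved.
await Promise.all(partitions.map(drain));

// `processed` may interleave ord-1 and ord-2 arbitrarily, but each
// key's seq values always appear in ascending order.
```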

Implementation: SQS FIFO Queue with Message Groups

AWS SQS FIFO queues implement the Sequential Convoy pattern through Message Group IDs. Messages with the same `MessageGroupId` are processed in strict FIFO order. Different message groups can be processed concurrently. The trade-off: SQS FIFO queues are limited to 3,000 transactions per second (or 300 without batching), whereas Kafka with many partitions scales much higher.

```typescript
// SQS FIFO: MessageGroupId enforces per-order ordering
await sqs.sendMessage({
  QueueUrl: "https://sqs.us-east-1.amazonaws.com/123456789/orders.fifo",
  MessageBody: JSON.stringify({ type: "PaymentConfirmed", orderId: "ord-456" }),
  MessageGroupId: "ord-456",                     // ← all events for this order are ordered
  MessageDeduplicationId: crypto.randomUUID(),   // ← required for FIFO; prevents duplicates
}).promise();
```
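Instead of a random UUID, the deduplication ID can be derived from the message content, so a retried send within SQS's five-minute deduplication window maps to the same ID and is dropped (SQS also offers a built-in `ContentBasedDeduplication` queue attribute that does this for you). A minimal sketch:

```typescript
import { createHash } from "node:crypto";

// Deterministic deduplication ID derived from message content: a retry
// of the same body maps to the same ID, so SQS drops the duplicate
// within its 5-minute deduplication window.
function dedupId(body: string): string {
  return createHash("sha256").update(body).digest("hex");
}

const body = JSON.stringify({ type: "PaymentConfirmed", orderId: "ord-456" });
console.log(dedupId(body) === dedupId(body)); // → true
```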

Hotspot / Hot Partition Problem

The Sequential Convoy pattern introduces a subtle failure mode: if one partition key is extremely hot (e.g., a viral seller generating 90% of all events), the partition handling that key becomes a bottleneck while other partitions sit idle. This is the hot partition problem.

| Mitigation | Description | Trade-off |
| --- | --- | --- |
| Sub-partition keys | Append a suffix to the key: `sellerId-0`, `sellerId-1` | Must aggregate results from multiple partitions; ordering only within sub-partition |
| Increase partition count | More partitions = better distribution for diverse keys | Does not help for a single hot key; cannot reduce partitions later without downtime |
| Separate topic/queue | Route hot keys to a dedicated high-throughput topic | Operational complexity; requires special casing |
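The sub-partition-key mitigation can be sketched as follows (`SUB_PARTITIONS` and the itemId-based suffix are illustrative choices, not from the lesson). Deriving the suffix from a secondary attribute keeps ordering per item while spreading the hot seller across several partitions:

```typescript
// Sub-partition keys for a hot seller: spread one hot key across N
// sub-keys, accepting that ordering holds only within each sub-key.
const SUB_PARTITIONS = 4;

function subKey(sellerId: string, itemId: string): string {
  // Derive the suffix from a secondary attribute so events for the
  // same item still share a sub-key and stay ordered.
  let h = 0;
  for (const ch of itemId) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return `${sellerId}-${h % SUB_PARTITIONS}`;
}

// Same seller + item → same sub-key (per-item ordering kept);
// different items spread across up to 4 sub-partitions.
const a = subKey("seller-42", "item-a");
const b = subKey("seller-42", "item-a");
console.log(a === b); // → true
```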

When to Use Sequential Convoy

  • Financial ledgers: debits and credits for the same account must be applied in order
  • Order lifecycle events: `Placed → Paid → Fulfilled → Delivered` for each order
  • User session events: clickstream events within a session must be replayed in order
  • Database change events (CDC): row-level changes from a CDC stream must be applied in order per primary key
  • Game state updates: player state mutations must be applied sequentially per player

💡

Interview Tip

If an interviewer asks how to guarantee ordering in a distributed message system, do NOT say 'use a single queue with one consumer' — that's a throughput killer. Instead explain Sequential Convoy: partition by a logical key so ordering is preserved within a group while different groups process in parallel. Then mention Kafka partition keys or SQS FIFO MessageGroupId as the concrete implementation.
