Message Broker Pattern

Central message routing: broker topologies, message persistence, delivery guarantees, and comparing brokers (Kafka, RabbitMQ, SQS, Pulsar).

15 min read · High interview weight

What Is a Message Broker?

A message broker is an intermediary that accepts messages from producers, persists them reliably, and routes them to the appropriate consumers. Unlike direct point-to-point HTTP calls, the broker decouples the sender from the receiver in time, space, and synchronization. The producer does not need to know whether any consumer is running, how many consumers exist, or where they live.

Brokers provide three fundamental capabilities: routing (deciding which queue or topic a message belongs to), persistence (durably storing messages so they survive restarts and failures), and delivery guarantees (ensuring messages reach consumers as intended). Everything else — ordering, filtering, dead-letter handling, priority — is built on top of these three primitives.

Broker Topologies

[Diagram] Message broker as the central hub: producers send to the broker, which routes messages to consumer queues.

There are two dominant broker topologies in the industry:

  • Queue-based (point-to-point): Each message is consumed by exactly one consumer. Multiple consumers on the same queue compete for messages (competing consumers pattern). Best for task distribution and work queues. Example: SQS Standard Queue, RabbitMQ default queues.
  • Topic-based (pub/sub): Each message is delivered to all subscribers. Each subscriber gets its own copy. Best for event notification and broadcast. Example: Kafka topics, SNS, RabbitMQ fanout exchanges.
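The difference between the two topologies can be sketched in a few lines of plain Python. This is an in-memory toy (the `Queue` and `Topic` classes and all names are illustrative, not any broker's API): the queue hands each message to exactly one competing consumer in round-robin fashion, while the topic copies each message to every subscriber.

```python
from collections import defaultdict
from itertools import cycle

class Queue:
    """Point-to-point: each message is consumed by exactly one consumer."""
    def __init__(self):
        self.consumers = []
        self._next = None

    def subscribe(self, consumer):
        self.consumers.append(consumer)
        self._next = cycle(self.consumers)   # competing consumers, round-robin

    def publish(self, message):
        next(self._next)(message)            # only one consumer receives it

class Topic:
    """Pub/sub: every subscriber receives its own copy of each message."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, consumer):
        self.subscribers.append(consumer)

    def publish(self, message):
        for consumer in self.subscribers:    # broadcast to all
            consumer(message)

# Two workers on a queue split the work; two topic subscribers each see everything.
q_seen, t_seen = defaultdict(list), defaultdict(list)
q, t = Queue(), Topic()
q.subscribe(lambda m: q_seen["worker-1"].append(m))
q.subscribe(lambda m: q_seen["worker-2"].append(m))
t.subscribe(lambda m: t_seen["sub-1"].append(m))
t.subscribe(lambda m: t_seen["sub-2"].append(m))
for i in range(4):
    q.publish(i)
    t.publish(i)
print(dict(q_seen))  # {'worker-1': [0, 2], 'worker-2': [1, 3]}
print(dict(t_seen))  # {'sub-1': [0, 1, 2, 3], 'sub-2': [0, 1, 2, 3]}
```

Real brokers layer acknowledgements, persistence, and failure handling on top, but the delivery semantics are exactly these two shapes.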

Message Persistence & Durability

Message persistence means the broker writes messages to disk before acknowledging the producer. If the broker crashes and restarts, no messages are lost. Most production brokers support this, but it comes at a latency cost: an `fsync` on every message is expensive. Kafka amortizes this by batching many messages into each write to the log segment; RabbitMQ relies on replicating messages across nodes (mirrored or quorum queues) and confirming to the publisher once the write is safe.
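The write-then-ack contract, and why batching helps, can be shown with a minimal append-only log. This is a conceptual sketch, not any broker's actual storage engine; the `DurableLog` class and file layout are invented for illustration. The key property: the producer's "ack" (the return from `append_batch`) happens only after `fsync`, and one `fsync` covers the whole batch.

```python
import os
import tempfile

class DurableLog:
    """Toy append-only log: acknowledge a batch only after the bytes are
    flushed and fsync'd, so a crash after the ack cannot lose messages.
    Syncing once per batch (not per message) amortizes the fsync cost."""
    def __init__(self, path):
        self.f = open(path, "ab")

    def append_batch(self, messages):
        for m in messages:
            self.f.write(m.encode() + b"\n")
        self.f.flush()
        os.fsync(self.f.fileno())   # one expensive sync for the whole batch
        return len(messages)        # "ack" only after data is on disk

path = os.path.join(tempfile.mkdtemp(), "segment.log")
log = DurableLog(path)
acked = log.append_batch(["order-created", "order-paid", "order-shipped"])

# Re-reading the file from scratch simulates a broker restart: data survives.
with open(path, "rb") as f:
    recovered = f.read().decode().splitlines()
print(acked, recovered)  # 3 ['order-created', 'order-paid', 'order-shipped']
```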

⚠️

Transient vs Persistent Messages

RabbitMQ and AMQP-based brokers allow per-message durability: `delivery_mode = 2` marks a message as persistent. If you publish with `delivery_mode = 1` to a durable queue, the message is still lost on broker restart. Both the queue AND the message must be marked durable.

Broker Comparison

| Broker | Model | Ordering | Throughput | Best For |
| --- | --- | --- | --- | --- |
| Apache Kafka | Partitioned log (pull) | Per-partition ordered | Very high (millions/sec) | Event streaming, audit logs, replay, analytics pipelines |
| RabbitMQ | Queue + exchange (push/pull) | Per-queue FIFO | High (tens of thousands/sec) | Task queues, complex routing, request-reply patterns |
| AWS SQS | Managed queue (pull) | Standard: unordered; FIFO: strict | Standard: unlimited; FIFO: 3,000 msg/sec | Serverless workloads, simple AWS-native decoupling |
| AWS SNS | Managed topic (push) | No ordering guarantee | Very high | Fan-out to SQS/Lambda/HTTP endpoints |
| Apache Pulsar | Segmented log + queue hybrid (pull/push) | Per-partition ordered | Very high | Geo-replicated streaming, tiered storage needs |
| Redis Pub/Sub | Ephemeral channel (push) | No persistence | Extremely low latency | Real-time notifications where loss is acceptable |

Kafka Architecture Deep Dive

Kafka's architecture differs fundamentally from traditional brokers. Messages are written to an immutable append-only log partitioned across brokers. Consumers track their position (offset) in the log independently. This enables: (1) multiple independent consumer groups reading the same topic at different offsets, (2) message replay by resetting an offset to a historical position, and (3) extremely high throughput through sequential disk I/O.
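These three properties fall out of one data structure. The sketch below is a toy in-memory model (the `PartitionedLog` class and method names are invented, not Kafka's API): messages with the same key land in the same partition, each consumer group commits its own offsets, and `seek` resets an offset to replay history.

```python
class PartitionedLog:
    """Toy Kafka-style topic: an append-only list per partition, with each
    consumer group tracking its own read position (offset) independently."""
    def __init__(self, partitions=1):
        self.partitions = [[] for _ in range(partitions)]
        self.offsets = {}   # (group, partition) -> next offset to read

    def produce(self, key, value):
        p = hash(key) % len(self.partitions)   # same key -> same partition
        self.partitions[p].append(value)       # sequential append, never mutated

    def poll(self, group, partition):
        off = self.offsets.get((group, partition), 0)
        batch = self.partitions[partition][off:]
        self.offsets[(group, partition)] = off + len(batch)  # commit offset
        return batch

    def seek(self, group, partition, offset):
        self.offsets[(group, partition)] = offset   # rewind to replay

log = PartitionedLog(partitions=1)
for event in ["created", "paid", "shipped"]:
    log.produce("order-42", event)

first = log.poll("billing", 0)      # billing group reads the whole log
second = log.poll("analytics", 0)   # analytics reads independently: same events
log.seek("billing", 0, 0)           # reset billing's offset to the start
replayed = log.poll("billing", 0)   # full replay without re-producing anything
print(first, second, replayed)
```

Consuming does not delete anything; the log is the source of truth and every group merely moves a cursor over it. That is the core difference from a traditional queue, where consumption is destructive.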

[Diagram] Kafka partitioned log: two independent consumer groups read the same topic at different offsets.

RabbitMQ Exchange Types

RabbitMQ's routing is powered by exchanges — objects that receive messages from producers and route them to queues based on binding rules. Choosing the right exchange type is the primary design decision when using RabbitMQ:

| Exchange Type | Routing Logic | Use Case |
| --- | --- | --- |
| `direct` | Route by exact routing key match | Task queue, one-to-one routing |
| `fanout` | Broadcast to all bound queues, ignores routing key | Pub/sub fan-out |
| `topic` | Wildcard pattern match on routing key (`*`, `#`) | Multi-subscriber with selective routing |
| `headers` | Route by message header attributes | Content-based routing without key encoding |
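The `topic` exchange's wildcard semantics are worth internalizing: `*` matches exactly one dot-separated word, while `#` matches zero or more words. A minimal sketch of that matching rule (my own recursive implementation, not RabbitMQ's code):

```python
def topic_match(pattern, routing_key):
    """AMQP topic-exchange matching: '*' matches exactly one word,
    '#' matches zero or more words; words are separated by dots."""
    def match(pat, key):
        if not pat:
            return not key          # both exhausted -> match
        if pat[0] == "#":
            # '#' absorbs zero words, or one word and then tries again
            return match(pat[1:], key) or (bool(key) and match(pat, key[1:]))
        if not key:
            return False
        if pat[0] in ("*", key[0]):
            return match(pat[1:], key[1:])
        return False
    return match(pattern.split("."), routing_key.split("."))

print(topic_match("orders.*.created", "orders.eu.created"))  # True
print(topic_match("orders.#", "orders.eu.west.created"))     # True
print(topic_match("orders.*", "orders.eu.created"))          # False: '*' is one word
```

In practice you declare bindings with such patterns (e.g. a queue bound with `orders.#` receives every order event) and the broker evaluates the match on each publish.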

Choosing a Broker

  • Use Kafka when you need event replay, audit log, high-throughput streaming, or multiple independent consumer groups reading the same events at different speeds.
  • Use RabbitMQ when you need complex routing rules, request-reply patterns, per-message priority, or a traditional task queue with flexible topologies.
  • Use SQS + SNS when you are already on AWS, want zero-ops managed messaging, and need simple fan-out with serverless consumers (Lambda).
  • Use Redis Pub/Sub only for ephemeral, low-latency notifications (e.g., live presence indicators) where losing messages during a broker restart is acceptable.

💡

Interview Tip

Interviewers almost always ask: 'Would you use Kafka or SQS here?' The answer depends on three questions: (1) Do you need message replay or multiple independent consumer groups? → Kafka. (2) Are you on AWS and want managed simplicity? → SQS/SNS. (3) Do you need complex routing logic? → RabbitMQ. Explain the trade-offs rather than picking a winner.
