
RabbitMQ & Traditional Message Brokers

RabbitMQ architecture: exchanges, queues, bindings, routing keys. How it differs from Kafka and when to choose it.

RabbitMQ Architecture

RabbitMQ is an AMQP-based message broker built around a flexible routing model. Unlike Kafka, where producers write directly to topic partitions, RabbitMQ adds a layer of indirection: producers publish to exchanges, exchanges route to queues via bindings, and consumers subscribe to queues. This routing flexibility is RabbitMQ's defining strength.

[Diagram: RabbitMQ topic exchange routing messages based on routing key patterns; wildcards: * matches exactly one word, # matches zero or more words]

Exchange Types

| Exchange Type | Routing Logic | Use Case |
| --- | --- | --- |
| Direct | Route by exact routing key match | Simple queue distribution, point-to-point |
| Fanout | Broadcast to all bound queues (ignores key) | Notifications, cache invalidation |
| Topic | Route by pattern matching (`*`, `#`) | Category-based routing, multi-tenant systems |
| Headers | Route by message header attributes | Complex filtering without key structure |
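The topic exchange's wildcard rules can be sketched in plain Python. This is a toy matcher for illustration only (the real matching happens inside the broker, not in client code): `*` consumes exactly one dot-separated word, `#` consumes zero or more.

```python
def topic_match(pattern: str, key: str) -> bool:
    """Toy AMQP topic matcher: '*' = exactly one word, '#' = zero or more words."""
    def match(p, k):
        if not p:
            return not k                 # pattern exhausted: key must be too
        if p[0] == '#':
            # '#' absorbs zero words, or one word and then tries again
            return match(p[1:], k) or (bool(k) and match(p, k[1:]))
        if not k:
            return False                 # pattern words left, key is empty
        if p[0] == '*' or p[0] == k[0]:
            return match(p[1:], k[1:])   # '*' or a literal word consumes one word
        return False
    return match(pattern.split('.'), key.split('.'))

print(topic_match('order.*', 'order.created'))      # True: one word after 'order.'
print(topic_match('order.*', 'order.created.eu'))   # False: '*' can't span two words
print(topic_match('order.#', 'order.created.eu'))   # True: '#' matches any suffix
```

Note that `#` also matches zero words, so a binding like `order.#` receives a message keyed just `order`.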

Message Acknowledgment and Durability

RabbitMQ supports manual acknowledgment (the consumer explicitly acks after processing) and automatic acknowledgment (the message counts as delivered the moment it is sent, so it is lost if the consumer crashes mid-processing). For durability, you need both durable queues (the queue definition survives a broker restart) and persistent messages (delivery_mode=2, written to disk). Without both, messages are lost on restart.

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Declare durable queue (survives broker restart)
channel.queue_declare(queue='tasks', durable=True)

# Publish a persistent message (delivery_mode=2 writes it to disk)
channel.basic_publish(
    exchange='',
    routing_key='tasks',
    body=b'job payload',
    properties=pika.BasicProperties(delivery_mode=2),
)

def callback(ch, method, properties, body):
    print(f"Processing: {body}")
    # ... do work ...
    # Manually ack AFTER successful processing
    ch.basic_ack(delivery_tag=method.delivery_tag)

# prefetch_count=1: don't dispatch a new message until this one is acked
channel.basic_qos(prefetch_count=1)
channel.basic_consume(queue='tasks', on_message_callback=callback)
channel.start_consuming()
```
💡

Use basic_qos(prefetch_count=1) for fair dispatch

By default, RabbitMQ dispatches messages round-robin without knowing whether a consumer is busy. Setting prefetch_count=1 tells RabbitMQ not to give a consumer more than one unacked message. This ensures busy consumers don't accumulate a backlog while idle consumers wait.
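The effect is easy to see with a small simulation (illustrative only, no broker involved): one fast and one slow consumer, with messages either pre-assigned round-robin or handed out one at a time as each ack arrives.

```python
import heapq

def round_robin(num_msgs, speeds):
    """Pre-assign messages alternately, ignoring how busy each consumer is."""
    finish = [0] * len(speeds)
    for i in range(num_msgs):
        finish[i % len(speeds)] += speeds[i % len(speeds)]
    return max(finish)  # time until the last consumer drains its backlog

def fair_dispatch(num_msgs, speeds):
    """prefetch_count=1 style: hand the next message to whoever acks first."""
    ready = [(0, c) for c in range(len(speeds))]   # (time consumer is free, id)
    heapq.heapify(ready)
    for _ in range(num_msgs):
        free_at, c = heapq.heappop(ready)
        heapq.heappush(ready, (free_at + speeds[c], c))
    return max(t for t, _ in ready)

# Consumer 0 takes 1s per message, consumer 1 takes 3s
print(round_robin(12, [1, 3]))    # 18: the slow consumer accumulates a backlog
print(fair_dispatch(12, [1, 3]))  # 9: the fast consumer takes more messages
```

With round-robin each consumer gets six messages regardless of speed, so the slow one finishes at t=18; with fair dispatch the fast consumer ends up handling nine of the twelve and everything is done by t=9.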

RabbitMQ vs Kafka: Head-to-Head

| Dimension | RabbitMQ | Kafka |
| --- | --- | --- |
| Paradigm | Smart broker, dumb consumer | Dumb broker, smart consumer |
| Message retention | Deleted after ack | Retained for configured period |
| Ordering | Per-queue FIFO | Per-partition ordering |
| Throughput | ~50K msgs/sec per node | Millions of msgs/sec |
| Routing | Rich (exchanges, bindings) | Simple (topic + partition key) |
| Replay | Not supported | Full replay within retention window |
| Consumer tracking | Broker tracks acks | Consumer tracks offsets |
| Best for | Task queues, complex routing, RPC | Event streaming, high throughput, replay |

Amazon SQS and Azure Service Bus

For teams that want managed traditional queuing without running their own broker, the cloud-native options are excellent. Amazon SQS is a fully managed queue service with Standard (at-least-once delivery, best-effort ordering) and FIFO (exactly-once processing, strict ordering, 300 TPS without batching) variants. Azure Service Bus offers queues and topics with rich features such as sessions, dead-lettering, message deferral, and scheduled delivery.

ℹ️

SQS Standard vs FIFO

SQS Standard queues have virtually unlimited throughput but may deliver messages out of order and occasionally deliver duplicates. SQS FIFO queues guarantee exactly-once processing and strict ordering within a message group but cap at 300 TPS (3,000 with batching). Choose Standard for high-throughput tasks where ordering doesn't matter; FIFO for financial transactions or order sequencing.
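The two FIFO guarantees can be modeled in a few lines. This is a toy model of the semantics, not the SQS API; the `FifoQueue` class is hypothetical and just mirrors MessageGroupId ordering and MessageDeduplicationId deduplication.

```python
from collections import deque

class FifoQueue:
    """Toy model of SQS FIFO semantics: per-group ordering, content dedup."""
    def __init__(self):
        self._groups = {}   # group_id -> deque of bodies, FIFO within a group
        self._seen = set()  # dedup ids seen within the dedup window

    def send(self, group_id, dedup_id, body):
        if dedup_id in self._seen:
            return False    # duplicate: accepted by the API but not enqueued again
        self._seen.add(dedup_id)
        self._groups.setdefault(group_id, deque()).append(body)
        return True

    def receive(self, group_id):
        q = self._groups.get(group_id)
        return q.popleft() if q else None

q = FifoQueue()
q.send('user-42', 'm1', 'debit $10')
q.send('user-42', 'm1', 'debit $10')   # retry with the same dedup id: dropped
q.send('user-42', 'm2', 'credit $5')
print(q.receive('user-42'))            # 'debit $10': strict order in the group
print(q.receive('user-42'))            # 'credit $5'
```

In real SQS the deduplication window is five minutes and ordering is only guaranteed within a message group, which is why unrelated entities (different users, different orders) should get different MessageGroupId values to keep throughput up.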

When to Choose RabbitMQ or SQS

  • Task queues — Background jobs (image processing, email sending, PDF generation) where each task should be processed once
  • Complex routing — Route messages to different consumers based on content or type without custom code
  • RPC over messaging — Request/reply patterns where you need a response from the consumer
  • Moderate throughput — Up to tens of thousands of messages per second
  • Operational simplicity — SQS requires zero broker management; RabbitMQ is simpler than Kafka
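The RPC-over-messaging pattern from the list above rests on two message properties: reply_to (which queue to answer on) and correlation_id (which request an answer belongs to). A broker-free sketch, with in-memory queues standing in for AMQP queues, shows the shape:

```python
import queue
import threading
import uuid

request_q = queue.Queue()  # stands in for the server's request queue

def rpc_server():
    while True:
        msg = request_q.get()
        if msg is None:
            break  # shutdown signal
        # Echo correlation_id so the client can match this reply to its request
        msg['reply_to'].put({'correlation_id': msg['correlation_id'],
                             'body': msg['body'] * 2})

server = threading.Thread(target=rpc_server)
server.start()

reply_q = queue.Queue()                # client's exclusive reply queue
corr_id = str(uuid.uuid4())
request_q.put({'reply_to': reply_q, 'correlation_id': corr_id, 'body': 21})

reply = reply_q.get(timeout=5)
assert reply['correlation_id'] == corr_id   # this reply answers our request
print(reply['body'])                        # 42

request_q.put(None)                    # shut the server down
server.join()
```

In RabbitMQ proper, the client declares an exclusive reply queue, sets reply_to and correlation_id on the request via BasicProperties, and discards any reply whose correlation_id it does not recognize.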
💡

Interview Tip

When comparing RabbitMQ and Kafka in an interview, the key insight is: RabbitMQ is a 'smart broker' that routes and tracks message delivery; Kafka is a 'dumb broker' that just appends to a log and lets consumers track their own position. This makes Kafka scale better but gives RabbitMQ more routing flexibility. If you hear 'task queue' or 'background jobs,' lean toward RabbitMQ/SQS. If you hear 'event streaming,' 'high throughput,' or 'replay,' lean toward Kafka.
