Tansu.io is a new open-source messaging broker, compatible with Apache Kafka's protocol, that reimagines its architecture for lean operations by offloading durability and state management to external, already-resilient storage like S3 or PostgreSQL. This design choice results in stateless brokers, enabling rapid scaling to zero and simplified operational overhead compared to traditional Kafka clusters. Tansu also introduces integrated broker-side schema validation and direct integration with open table formats for analytics.
Apache Kafka is a powerful distributed streaming platform, but its operational complexity can be significant. Kafka brokers are often described as "pets" because of their stateful nature: they require careful configuration, leader elections, and data replication for resilience. Scaling down Kafka clusters is notoriously difficult, which contributes to higher operational costs and resource consumption. Tansu.io, introduced at QCon London 2026, aims to address these challenges by fundamentally rethinking Kafka's architecture while maintaining protocol compatibility.
The core differentiator of Tansu is its stateless broker design. Unlike Kafka, which achieves resilience through internal data replication between brokers, Tansu assumes that its external storage is inherently durable and resilient. This allows Tansu brokers to be "cattle": ephemeral, stateless entities that can be scaled up or down rapidly, even to zero, with minimal overhead (roughly 20 MB of resident memory and about 10 ms to scale up). This architectural shift significantly simplifies operations and resource management.
Architectural Principle: Offloading State
By offloading state management and durability to external, highly-available services (like S3 or PostgreSQL), Tansu brokers become lightweight and disposable. This is a common pattern in cloud-native architectures to improve elasticity, resilience, and operational simplicity of compute layers.
Tansu offers pluggable storage backends, such as S3 and PostgreSQL, providing flexibility for different use cases.
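The stateless-broker pattern described above can be sketched in a few lines of Python. This is an illustrative model, not Tansu's actual code or API: the broker object holds no log data and delegates every append and fetch to a pluggable storage backend, so any two broker instances over the same backend are interchangeable and can be created or destroyed freely. All class and method names here are hypothetical.

```python
from abc import ABC, abstractmethod


class Storage(ABC):
    """Durable backend the broker trusts for persistence (e.g. S3, PostgreSQL)."""

    @abstractmethod
    def append(self, topic: str, record: bytes) -> int: ...

    @abstractmethod
    def fetch(self, topic: str, offset: int) -> bytes: ...


class InMemoryStorage(Storage):
    """Stand-in for a real backend; keeps per-topic logs in a dict."""

    def __init__(self):
        self.logs: dict[str, list[bytes]] = {}

    def append(self, topic, record):
        log = self.logs.setdefault(topic, [])
        log.append(record)
        return len(log) - 1  # offset of the appended record

    def fetch(self, topic, offset):
        return self.logs[topic][offset]


class StatelessBroker:
    """Holds no log data itself: every instance over the same storage is
    equivalent, so instances can be added, removed, or scaled to zero at will."""

    def __init__(self, storage: Storage):
        self.storage = storage

    def produce(self, topic, record):
        return self.storage.append(topic, record)

    def consume(self, topic, offset):
        return self.storage.fetch(topic, offset)


# Two "broker" instances sharing one backend behave as one logical cluster.
backend = InMemoryStorage()
b1, b2 = StatelessBroker(backend), StatelessBroker(backend)
off = b1.produce("orders", b'{"id": 1}')
assert b2.consume("orders", off) == b'{"id": 1}'  # b2 sees b1's write
```

Because the compute layer carries no state, replacing `InMemoryStorage` with an S3- or PostgreSQL-backed implementation changes durability characteristics without touching the broker logic, which is the essence of the offloading principle described above.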
Tansu enhances data quality and analytics workflows by implementing broker-side schema validation (for Avro, JSON, and Protobuf). This contrasts with standard Kafka, where schema enforcement typically relies on optional, client-side registries. Validation adds a slight overhead, since batches must be decompressed and checked, but it guarantees that only schema-conforming data is persisted. Furthermore, Tansu can write validated data directly into open table formats like Apache Iceberg, Delta Lake, or Parquet via "sink topics", potentially acting as a direct pipeline from Kafka-compatible producers to analytics-ready data warehouses.
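The broker-side validation flow can be sketched as follows. This is a simplified, hypothetical model rather than Tansu's implementation: a toy "schema" that only lists required fields stands in for a real Avro/JSON Schema/Protobuf check, and the function names are illustrative. It shows the key behaviors the article describes: the broker must decompress the batch to validate it (the source of the overhead), and an invalid batch is rejected before anything is persisted.

```python
import gzip
import json

# Hypothetical per-topic schema registry held by the broker; a real broker
# would store full Avro/JSON Schema/Protobuf definitions, not just field names.
TOPIC_SCHEMAS = {
    "orders": {"required": ["id", "amount"]},
}


class SchemaViolation(Exception):
    """Raised when a produced record does not match the topic's schema."""


def validate_batch(topic: str, compressed_batch: bytes) -> list[dict]:
    """Decompress a produce batch and validate every record before accepting it."""
    schema = TOPIC_SCHEMAS.get(topic)
    records = json.loads(gzip.decompress(compressed_batch))  # the extra cost
    if schema is not None:
        for record in records:
            missing = [f for f in schema["required"] if f not in record]
            if missing:
                raise SchemaViolation(f"record {record} missing fields {missing}")
    return records  # accepted: safe to hand to the storage backend


# A conforming batch passes validation...
batch = gzip.compress(json.dumps([{"id": 1, "amount": 9.5}]).encode())
ok = validate_batch("orders", batch)

# ...while a non-conforming batch is rejected and never reaches storage.
bad = gzip.compress(json.dumps([{"id": 2}]).encode())
try:
    validate_batch("orders", bad)
    rejected = False
except SchemaViolation:
    rejected = True
```

Rejecting the whole batch at the broker, rather than trusting each producer to consult a registry, is what lets downstream sinks such as Iceberg or Delta Lake tables assume every stored record already conforms to the schema.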