Module 10
Patterns that make systems fast and scalable: cache-aside, write-through and write-behind caching, read/write separation, sharding, index tables, static content hosting, claim check, pipes-and-filters processing, and choreography vs orchestration.
Load data into cache on demand: the cache-aside flow, cache miss handling, consistency considerations, and stampede prevention.
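The cache-aside flow can be sketched with an in-memory dict standing in for the cache and another for the database; the names (`get_user`, `update_user`, `cache`, `db`) are illustrative, not from any specific library:

```python
cache = {}
db = {1: {"name": "Ada"}, 2: {"name": "Grace"}}

def get_user(user_id):
    # 1. Try the cache first.
    value = cache.get(user_id)
    if value is not None:
        return value
    # 2. Cache miss: load from the backing store...
    value = db[user_id]
    # 3. ...and populate the cache for subsequent reads.
    cache[user_id] = value
    return value

def update_user(user_id, data):
    # On writes, update the store and *invalidate* (rather than update)
    # the cached entry, which narrows the window for stale reads.
    db[user_id] = data
    cache.pop(user_id, None)
```

In production the dict would be Redis or Memcached with a TTL, and a per-key lock (or probabilistic early refresh) would guard step 2 against a stampede of concurrent misses.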
Synchronous vs asynchronous cache writes: write-through for consistency, write-behind (write-back) for performance, and their failure modes.
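A minimal sketch of the two write policies, using plain dicts as the store (class and method names are illustrative):

```python
from collections import deque

class WriteThroughCache:
    """Every write hits cache and store synchronously: slower writes,
    but the store is never behind the cache."""
    def __init__(self, store):
        self.store = store
        self.cache = {}

    def put(self, key, value):
        self.cache[key] = value
        self.store[key] = value

class WriteBehindCache:
    """Writes land in the cache immediately and reach the backing store
    later in batches. If the process dies before flush(), buffered
    writes are lost -- the core failure mode of write-behind."""
    def __init__(self, store):
        self.store = store
        self.cache = {}
        self.pending = deque()  # queued writes awaiting flush

    def put(self, key, value):
        self.cache[key] = value
        self.pending.append((key, value))

    def flush(self):
        # In a real system this runs on a timer or size threshold.
        while self.pending:
            key, value = self.pending.popleft()
            self.store[key] = value
```

Note that between `put` and `flush`, a reader going straight to the store sees stale data, which is exactly the consistency trade made for write throughput.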
Optimize reads independently from writes: denormalized read stores, projection patterns, and keeping read models eventually consistent.
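The projection idea can be shown in a few lines: a normalized write store, a denormalized read model, and a projection function that folds change events into it (all names are hypothetical):

```python
orders = []          # normalized write store
order_counts = {}    # denormalized read model: orders per customer

def place_order(customer_id, amount):
    orders.append({"customer": customer_id, "amount": amount})
    # Emit a change event for projectors to consume.
    return {"type": "order_placed", "customer": customer_id}

def project(event):
    # Fold each event into the read model. In production this runs
    # asynchronously (off a log or bus), so the read model lags the
    # write store slightly -- i.e., it is eventually consistent.
    if event["type"] == "order_placed":
        c = event["customer"]
        order_counts[c] = order_counts.get(c, 0) + 1
```

Reads that would otherwise require a `GROUP BY` over the write store become a single key lookup in `order_counts`.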
Horizontal data partitioning for write scalability: shard key strategies, range vs hash sharding, cross-shard queries, and rebalancing.
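Hash sharding can be sketched as follows; a stable hash of the shard key picks one of N shard dicts (the helper names are illustrative):

```python
import hashlib

NUM_SHARDS = 4
shards = [dict() for _ in range(NUM_SHARDS)]

def shard_for(key):
    # Stable cross-process hash of the shard key, modulo shard count.
    # (Python's built-in hash() is salted per process, so md5 is used
    # here purely for routing stability, not security.)
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

def put(key, value):
    shards[shard_for(key)][key] = value

def get(key):
    return shards[shard_for(key)].get(key)

def count_all():
    # A cross-shard query: no single shard has the answer, so it must
    # fan out to every shard and merge -- the cost of partitioning.
    return sum(len(s) for s in shards)
```

The modulo routing also illustrates the rebalancing problem: changing `NUM_SHARDS` remaps most keys, which is why real systems use consistent hashing or directory-based shard maps.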
Create secondary indexes as separate tables: enabling efficient queries on non-primary-key fields in NoSQL and sharded databases.
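The index-table pattern in miniature: a primary table keyed by `user_id`, plus a separate table keyed by `email` that maps back to the primary key (names are illustrative):

```python
users_by_id = {}     # primary table, keyed by user_id
id_by_email = {}     # index table, keyed by a non-primary-key field

def add_user(user_id, email, name):
    users_by_id[user_id] = {"email": email, "name": name}
    # The application must keep the index in sync on every write;
    # the store does not maintain it for you.
    id_by_email[email] = user_id

def find_by_email(email):
    # Two lookups replace a full scan: index table, then primary table.
    user_id = id_by_email.get(email)
    return users_by_id.get(user_id) if user_id is not None else None
```

In a NoSQL or sharded store the two tables may live on different partitions, so the dual write is not atomic; handling that (idempotent retries, periodic index repair) is part of the pattern's cost.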
Serve static assets directly from object storage and CDNs: reducing server load, cache headers, versioned deployments, and edge computing.
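One mechanism behind versioned deployments is content-hashed filenames, sketched here (the function is hypothetical, not from any build tool):

```python
import hashlib

def versioned_url(path, content):
    # Embed a hash of the file's content in its name. Because a changed
    # file gets a new URL, the CDN can cache each asset forever, e.g.
    # "Cache-Control: public, max-age=31536000, immutable".
    digest = hashlib.sha256(content).hexdigest()[:8]
    stem, dot, ext = path.rpartition(".")
    return f"{stem}.{digest}.{ext}" if dot else f"{path}.{digest}"
```

A deploy then consists of uploading the new hashed files to object storage and publishing HTML that references the new URLs; old assets remain valid for clients still holding the previous page.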
Reduce message size by storing large payloads externally: the claim check flow, storage backends, and maintaining message processing efficiency.
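The claim check flow reduces to two operations, sketched here with a dict standing in for the blob store (S3, Azure Blob Storage, etc.); the names are illustrative:

```python
import uuid

blob_store = {}  # stands in for external object storage

def check_in(payload):
    # Store the large payload externally and hand back a small ticket.
    ticket = str(uuid.uuid4())
    blob_store[ticket] = payload
    # Only this small message travels through the broker.
    return {"claim_check": ticket}

def check_out(message):
    # The consumer redeems the claim check to fetch the full payload.
    return blob_store[message["claim_check"]]
```

The broker now moves kilobyte-sized tickets instead of megabyte payloads, which keeps queue latency and broker memory predictable; cleanup of redeemed blobs (TTL or explicit delete) is the main operational detail left out of the sketch.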
Decompose processing into a pipeline of independent filters: composability, parallel execution, error handling, and real-world data pipeline examples.
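Composability is the heart of the pattern: each filter is an independent function, and a pipeline is just their composition (a minimal sketch, with `pipeline` and the filters as illustrative names):

```python
def pipeline(*filters):
    # Compose independent filters into one processing function.
    # Each stage is a plain value -> value function, so stages can be
    # tested, replaced, or run in parallel across items independently.
    def run(value):
        for f in filters:
            value = f(value)
        return value
    return run

# Three trivial text-processing filters.
strip = str.strip
lower = str.lower
tokenize = str.split

normalize = pipeline(strip, lower, tokenize)
# normalize("  Hello World ") -> ["hello", "world"]
```

Real data pipelines connect the filters with queues so each stage scales and fails independently; error handling then becomes per-stage (retry the one filter that failed, or divert the item to a dead-letter queue) rather than restarting the whole job.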
Two models for coordinating services: event-based choreography (decentralized) vs command-based orchestration (centralized). Trade-offs and when to use each.
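The contrast can be shown in a few lines, assuming a toy in-process event bus and an order flow with payment and shipping steps (all names hypothetical):

```python
# --- Choreography: services react to events; no central coordinator. ---
handlers = {}

def subscribe(event_type, fn):
    handlers.setdefault(event_type, []).append(fn)

def publish(event_type, data):
    # Whoever subscribed reacts; the publisher does not know who.
    for fn in handlers.get(event_type, []):
        fn(data)

log = []
subscribe("order_placed", lambda d: log.append(("payment", d)))
subscribe("order_placed", lambda d: log.append(("shipping", d)))

# --- Orchestration: one coordinator invokes each step explicitly. ---
def place_order_orchestrated(data):
    # The orchestrator owns the workflow: ordering, error handling,
    # and compensation live in one place.
    log.append(("payment", data))
    log.append(("shipping", data))
```

Choreography keeps services decoupled but scatters the workflow across subscriptions; orchestration centralizes the workflow (easier to reason about and compensate) at the cost of a coordinator every step depends on.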