Cache Eviction Policies
LRU, LFU, FIFO, TTL, and random eviction. How each policy works, memory overhead, and when to pick one over another.
Why Eviction Policy Matters
A cache has finite memory. When it fills up, the cache must decide which entries to remove to make room for new ones. The eviction policy determines that decision. A poor choice can trash your hit rate: evict the wrong entry and the next request for it costs a database round-trip; evict the right one and you free room for a more popular key. Cache eviction is one of those deceptively simple concepts that have a huge impact on real-world performance.
LRU — Least Recently Used
LRU evicts the entry that has not been accessed for the longest time. The intuition: if you haven't touched something in a while, you probably won't soon. LRU is implemented with a doubly-linked list and a hash map. Every `get` moves the entry to the head; when eviction is needed, the tail is removed.
```python
# LRU Cache — O(1) get and put using OrderedDict
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = OrderedDict()  # preserves insertion/access order

    def get(self, key: str):
        if key not in self.cache:
            return None
        self.cache.move_to_end(key)  # mark as recently used
        return self.cache[key]

    def put(self, key: str, value) -> None:
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict LRU (front)
```

LRU works well for most general-purpose caches because temporal locality is common: data accessed recently tends to be accessed again soon. Redis does not maintain an exact LRU list; it approximates LRU by sampling a handful of keys (tunable via `maxmemory-samples`) and evicting the one with the oldest access time, recorded in a per-key 24-bit clock field.
LRU scan resistance
LRU is vulnerable to scan patterns: a one-time sequential pass over a large dataset evicts the entire working set of hot keys. Consider LRU-K or SLRU (segmented LRU) variants for workloads with occasional large scans.
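One way to picture segmented LRU: new keys land in a probationary segment, and a second hit promotes them to a protected segment, so a one-time scan only churns probation. The sketch below is illustrative (class name, segment sizes, and the demote-on-overflow choice are assumptions, not taken from any particular implementation):

```python
from collections import OrderedDict

class SLRUCache:
    """Segmented LRU sketch: scans churn only the probationary segment."""

    def __init__(self, probation_cap: int, protected_cap: int):
        self.probation = OrderedDict()  # first-time entries
        self.protected = OrderedDict()  # entries hit at least twice
        self.probation_cap = probation_cap
        self.protected_cap = protected_cap

    def get(self, key):
        if key in self.protected:
            self.protected.move_to_end(key)
            return self.protected[key]
        if key in self.probation:
            value = self.probation.pop(key)  # promote on second touch
            self.protected[key] = value
            if len(self.protected) > self.protected_cap:
                # demote the protected LRU back to probation, not straight out
                old_key, old_val = self.protected.popitem(last=False)
                self._insert_probation(old_key, old_val)
            return value
        return None

    def put(self, key, value):
        if key in self.protected:
            self.protected[key] = value
            self.protected.move_to_end(key)
        else:
            self._insert_probation(key, value)

    def _insert_probation(self, key, value):
        self.probation[key] = value
        self.probation.move_to_end(key)
        if len(self.probation) > self.probation_cap:
            self.probation.popitem(last=False)  # evict probationary LRU
```

A scan of cold keys can then evict at most the probationary segment; promoted hot keys survive.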
LFU — Least Frequently Used
LFU evicts the entry with the lowest access count. The intuition: items that have been hit many times are probably popular and should stay. LFU is more scan-resistant than LRU — a one-time scan doesn't evict genuinely popular keys because those keys have higher counts.
Cons: LFU has higher implementation complexity (it requires a frequency counter per entry plus a min-heap or frequency-bucketed list). More importantly, it suffers from frequency pollution: an entry that was very popular in the past but is no longer accessed will stick around, displacing fresh, potentially popular entries. The solution is aging — decay counts over time. Redis's `allkeys-lfu` policy approximates this by packing a small logarithmically-incremented counter and a decay timestamp into each key's 24-bit metadata field, decaying counts over time (tunable via `lfu-log-factor` and `lfu-decay-time`).
FIFO — First In, First Out
FIFO evicts the oldest entry by insertion time, regardless of how recently or frequently it was accessed. Simple to implement (a queue). Poor hit rate for most real workloads because age alone is a weak predictor of future access. Primarily used in specialized hardware caches or when simplicity outweighs hit rate.
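A FIFO cache really is just a queue plus a map; this sketch (illustrative, not production code) also shows the defining quirk — reads never protect an entry:

```python
from collections import deque

class FIFOCache:
    """Minimal FIFO sketch: evict by insertion order only."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.queue = deque()  # keys in insertion order
        self.cache = {}

    def get(self, key):
        return self.cache.get(key)  # reads never reorder anything

    def put(self, key, value):
        if key not in self.cache:
            if len(self.cache) >= self.capacity:
                oldest = self.queue.popleft()  # evict oldest insertion
                del self.cache[oldest]
            self.queue.append(key)
        self.cache[key] = value   # updating a key keeps its old position
```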
TTL — Time to Live
TTL is not a replacement for LRU/LFU — it is an additional expiration mechanism. Every cache entry is assigned a maximum lifetime. After the TTL expires, the entry is considered stale and removed (lazily on next access, or eagerly by a background sweep). TTL is the primary cache invalidation tool in most systems.
Setting the right TTL is an art. Too short: frequent cache misses and database hammering. Too long: stale data served to users after the underlying record changes. Common patterns: short TTL (seconds) for highly volatile data like stock prices, medium TTL (minutes) for user session data, long TTL (hours/days) for static reference data like country codes.
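Lazy expiration — the "removed on next access" path described above — can be sketched in a few lines. This is illustrative; a real deployment would pair it with a background sweep so keys that are never read again still get reclaimed:

```python
import time

class TTLCache:
    """Lazy-expiration TTL sketch: entries carry an absolute deadline."""

    def __init__(self, default_ttl):
        self.default_ttl = default_ttl
        self.cache = {}  # key -> (value, expires_at)

    def put(self, key, value, ttl=None):
        lifetime = ttl if ttl is not None else self.default_ttl
        self.cache[key] = (value, time.monotonic() + lifetime)

    def get(self, key):
        entry = self.cache.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self.cache[key]  # lazy expiration on access
            return None
        return value
```

Note the use of `time.monotonic()` rather than wall-clock time, so expiry is immune to system clock adjustments.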
Random Eviction
Random eviction picks a victim at random. Surprisingly, it performs comparably to LRU in many workloads and is cheaper to implement. Redis uses it as `allkeys-random`. It's a reasonable choice when you cannot predict access patterns and want minimal overhead.
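The "cheaper to implement" claim is visible in a sketch: no ordering metadata to maintain, just O(1) random victim selection via the swap-remove trick (one common way to do it; names are illustrative):

```python
import random

class RandomCache:
    """Random-eviction sketch with O(1) victim selection."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = {}
        self.keys = []      # list supports O(1) random indexing
        self.index_of = {}  # key -> position in self.keys

    def get(self, key):
        return self.cache.get(key)  # no bookkeeping on reads at all

    def put(self, key, value):
        if key not in self.cache:
            if len(self.cache) >= self.capacity:
                i = random.randrange(len(self.keys))
                victim = self.keys[i]
                last = self.keys.pop()  # swap-remove keeps the list dense
                if victim != last:
                    self.keys[i] = last
                    self.index_of[last] = i
                del self.index_of[victim]
                del self.cache[victim]
            self.index_of[key] = len(self.keys)
            self.keys.append(key)
        self.cache[key] = value
```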
Eviction Policy Comparison
| Policy | Evicts | Memory Overhead | Scan Resistant | Best For |
|---|---|---|---|---|
| LRU | Least recently accessed | Low (doubly linked list + hashmap) | No | General workloads with temporal locality |
| LFU | Least frequently accessed | Medium (frequency counters) | Yes | Stable hotspots, recommendation caches |
| FIFO | Oldest inserted | Very low (queue) | No | Simple/predictable access patterns |
| TTL | Expired entries | Low (expiry timestamp per entry) | N/A | Volatile data needing time-based freshness |
| Random | Random entry | None extra | Partial | When access patterns are uniform or unknown |
Redis Eviction Policies in Practice
Redis exposes eviction policy via the `maxmemory-policy` config option. The available policies combine scope (all keys vs only keys with TTL) with algorithm (LRU, LFU, random, TTL-nearest):
- `noeviction` — return errors when memory is full (default, safest for durability)
- `allkeys-lru` — LRU across all keys (most common for pure caches)
- `volatile-lru` — LRU only among keys that have an expiry set
- `allkeys-lfu` — LFU across all keys (best for skewed access patterns)
- `volatile-ttl` — evict keys closest to expiration first
- `allkeys-random` — random eviction across all keys
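Eviction only kicks in once a memory cap is set, so the policy is configured alongside `maxmemory`. A minimal `redis.conf` fragment (the 2gb limit is illustrative):

```conf
# Cap memory and choose an eviction policy
maxmemory 2gb
maxmemory-policy allkeys-lru
```

The same settings can be changed at runtime with `CONFIG SET maxmemory-policy allkeys-lru`.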
Production recommendation
For a Redis cache (not a primary store), use `allkeys-lru` or `allkeys-lfu`. This lets Redis evict any key, preventing memory full errors. Reserve `noeviction` for Redis instances used as a primary data store where data loss is unacceptable.
Interview Tip
When asked about eviction, don't just recite LRU. Show you understand the trade-offs: 'I'd start with LRU because temporal locality holds for most of our access patterns, but if I observed that our working set is larger than the cache — meaning LRU thrashes — I'd consider LFU or increasing the cache size. I'd also pair any policy with appropriate TTLs to bound staleness.' That kind of reasoning demonstrates operational maturity.