
Optimizing cache invalidation for highly concurrent, eventually consistent systems?

Felix Sato
Hey everyone, I'm working on a system with a very high read-to-write ratio and an eventually consistent data model. We're using a distributed cache, and cache invalidation has become both a bottleneck and a source of subtle bugs.

We've tried TTLs and simple write-through invalidation, but under this level of concurrency we either see stale data for longer than is acceptable or hit thundering-herd problems when popular items expire. Has anyone implemented more sophisticated cache invalidation strategies for similar scenarios?

I'm considering things like versioning data in the cache, or perhaps a publish-subscribe model for invalidation events. What are the common pitfalls, and which approaches have you found effective at keeping cache coherence high without introducing too much complexity or performance overhead?
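To make the versioning idea concrete, here's a minimal in-process sketch (all names are illustrative, not from any particular library): instead of deleting cached values on write, each write bumps a per-entity version counter, and readers build the cache key from (entity, version). Stale entries become unreachable rather than needing explicit, race-prone invalidation, and can simply age out via TTL.

```python
import threading

class VersionedCache:
    """Key-versioning sketch: writes bump a version counter instead of
    deleting cache entries; reads look up (key, current_version), so
    old cached values are silently orphaned rather than invalidated."""

    def __init__(self):
        self._lock = threading.Lock()
        self._versions = {}  # entity key -> current version number
        self._cache = {}     # (entity key, version) -> cached value
        self._store = {}     # stand-in for the backing database

    def write(self, key, value):
        with self._lock:
            self._store[key] = value
            # Bumping the version implicitly invalidates every older
            # cache entry for this key -- no delete round-trip needed.
            self._versions[key] = self._versions.get(key, 0) + 1

    def read(self, key):
        with self._lock:
            version = self._versions.get(key, 0)
        cache_key = (key, version)
        if cache_key in self._cache:
            return self._cache[cache_key]
        # Cache miss for the current version: fall through to the store
        # and populate the versioned entry.
        value = self._store.get(key)
        self._cache[cache_key] = value
        return value
```

In a real deployment the version counters would live somewhere cheap and fast to read (e.g. alongside the cache itself), and orphaned entries would be reclaimed by TTL or LRU eviction; the trade-off is one extra version lookup per read.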
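For the thundering-herd side specifically, one common mitigation is request coalescing ("single flight"): when an entry expires, only the first caller recomputes it while concurrent callers wait for that one result. A rough sketch under the assumption of a single process (the class and method names here are made up for illustration):

```python
import threading

class SingleFlight:
    """Request-coalescing sketch: concurrent loads for the same key are
    collapsed into one call to the loader; followers block on an Event
    until the leader publishes the result."""

    def __init__(self):
        self._lock = threading.Lock()
        self._inflight = {}  # key -> Event for an in-progress load
        self._results = {}   # key -> last loaded value

    def load(self, key, loader):
        with self._lock:
            event = self._inflight.get(key)
            if event is None:
                # First caller becomes the leader and does the work.
                event = threading.Event()
                self._inflight[key] = event
                leader = True
            else:
                leader = False
        if leader:
            try:
                self._results[key] = loader()
            finally:
                with self._lock:
                    del self._inflight[key]
                event.set()  # wake all waiting followers
            return self._results[key]
        # Followers wait for the leader's result instead of re-loading.
        event.wait()
        return self._results[key]
```

Distributed equivalents usually replace the in-process lock with a short-lived lock key in the cache itself, often combined with serving slightly stale data while one worker refreshes in the background.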