Back-of-the-envelope calculations: how accurate do they need to be?
Carlos Kowalski
When doing back-of-the-envelope calculations for system design, I often wonder how accurate they really need to be. Is it enough to be in the right order of magnitude (e.g., 'millions of requests per second' versus 'hundreds of thousands'), or do interviewers expect more precision? I've seen some folks whip out exact numbers for storage, bandwidth, and QPS, and it makes me question my approach.
My current strategy is to focus on broad estimates that reveal bottlenecks and how quantities scale, rather than getting bogged down in exact figures. For example, quickly estimating storage for a user base or throughput for a specific API endpoint. But I'm always nervous that it's not 'precise enough.'
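To make this concrete, here's the kind of rough estimate I mean, as a sketch. All the input numbers (daily active users, post rate, post size, retention) are made-up assumptions for illustration, not figures for any real service:

```python
# Rough back-of-the-envelope storage/QPS estimate for a hypothetical service.
# Every input below is an illustrative assumption, not a real figure.

DAU = 100_000_000            # assumed daily active users
POSTS_PER_USER_PER_DAY = 2   # assumed write rate per user
AVG_POST_BYTES = 1_000       # ~1 KB per post, metadata included
RETENTION_YEARS = 5

daily_writes = DAU * POSTS_PER_USER_PER_DAY      # 2e8 posts/day
daily_storage = daily_writes * AVG_POST_BYTES    # bytes written per day
total_storage = daily_storage * 365 * RETENTION_YEARS

write_qps = daily_writes / 86_400                # average writes/second

print(f"daily storage:    ~{daily_storage / 1e9:.0f} GB/day")
print(f"5-year storage:   ~{total_storage / 1e12:.0f} TB")
print(f"average write QPS: ~{write_qps:,.0f}")
```

The point is that round inputs keep the arithmetic doable in your head (2e8 posts x 1 KB = 200 GB/day), and the conclusion ("a few thousand write QPS, hundreds of TB over five years") is what drives the design, not the third significant digit.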
It would be super helpful to have a kind of cheat sheet for common numbers: average tweet size, typical QPS for common services, latencies for different network hops or disk accesses. What's your experience? Is 'good enough' truly good enough, or should I be striving for more exact figures in a 45-minute interview?
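On the cheat-sheet point, the closest thing that exists is the well-known "latency numbers every programmer should know" list popularized by Jeff Dean. Here it is as a small sketch; the values are ballpark orders of magnitude that vary by hardware generation, so treat them as ratios, not measurements:

```python
# Approximate latency cheat sheet (ballpark figures, in nanoseconds).
# Values drift with hardware generations; the ratios are what matter.

LATENCY_NS = {
    "L1 cache reference":                0.5,
    "Main memory reference":             100,
    "Send 1 KB over 1 Gbps network":     10_000,       # ~10 us
    "SSD random read":                   150_000,      # ~150 us
    "Round trip within same datacenter": 500_000,      # ~0.5 ms
    "Read 1 MB sequentially from SSD":   1_000_000,    # ~1 ms
    "Disk seek":                         10_000_000,   # ~10 ms
    "Round trip CA <-> Netherlands":     150_000_000,  # ~150 ms
}

# The useful takeaways are relative: main memory is ~200x slower than
# L1 cache, and a disk seek is ~100,000x slower than a memory reference.
for op, ns in LATENCY_NS.items():
    print(f"{op:<36} ~{ns:>13,.1f} ns")
```

Knowing that a cross-continent round trip costs roughly a thousand intra-datacenter round trips is exactly the kind of "good enough" precision that lets you justify a cache or a CDN in an interview.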