I've shipped hundreds of estimates across three companies. My accuracy improved dramatically when I stopped relying on gut feel and started using historical data from our actual codebase.
Estimation best practices are the techniques and habits that make software effort estimates more accurate, more honest, and more useful for planning. The distinction between estimation (prediction) and commitment (promise) is foundational: estimation is your best guess at how long something will take based on available information; commitment is a promise to deliver by a date. Not making this distinction creates confusion.
Accurate estimation enables credible planning. When estimates are honest and grounded in reality, product roadmaps become achievable. When estimates are optimistic, roadmaps become fiction.
Estimation practices also affect engineering culture. Teams that discuss estimation honestly develop better planning discipline. Teams that use estimation to disguise optimism ("the estimate was right, we just didn't execute") lose credibility.
1. Separate Estimation from Commitment
Estimate: "Based on what I know, this will probably take 5-7 weeks."
Commitment: "I commit to delivering this by March 15."
These are different conversations. Estimation is about prediction. Commitment is about promise. If you estimate 5-7 weeks and someone asks you to commit to 4 weeks, that's a negotiation, not estimation. Be clear about which conversation you're having.
2. Use Historical Data
Reference class forecasting: "We built something similar last quarter. That was 6 weeks. This is slightly bigger, so I estimate 7 weeks."
When reference data exists, use it. It beats intuition. When reference data doesn't exist, you're estimating something novel. Acknowledge that. "We have no reference class, so this is high uncertainty. I estimate 5-9 weeks."
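Reference class forecasting can be reduced to simple arithmetic: take a comparable past effort and scale it by how much bigger the new work feels. A minimal sketch in Python; the function name and the 15% size ratio are illustrative assumptions, not part of any standard tooling:

```python
# Sketch of reference class forecasting: scale a known past effort
# by a rough size ratio. Names and numbers are illustrative.

def reference_class_estimate(reference_weeks, size_ratio):
    """Estimate effort by scaling a similar past project.

    reference_weeks: how long the comparable project actually took.
    size_ratio: how much bigger (>1) or smaller (<1) the new work feels.
    """
    return reference_weeks * size_ratio

# The example from the text: a similar feature took 6 weeks,
# and this one is slightly bigger (~15% more scope).
estimate = reference_class_estimate(6, 1.15)
print(round(estimate, 1))  # 6.9 -> round up to ~7 weeks
```

The size ratio is a judgment call, but it is a much smaller judgment call than estimating the whole thing from scratch.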
3. Break Down Scope
Large estimates are inaccurate because large tasks have more unknowns. Break a large feature into smaller pieces.
Instead of: "Export feature: 8 weeks"
Do this: "Generate CSV from user table (2 weeks), Handle permissions (1 week), Add GDPR compliance (2 weeks), Create async job (1 week), Build progress UI (1 week), Testing and edge cases (1 week). Total: 8 weeks."
The sum is the same, but each piece is estimated with more confidence.
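The breakdown above is just a sum of small, more confident estimates. A minimal sketch mirroring the export-feature numbers from the text:

```python
# Component breakdown: estimate small pieces, then sum.
# Task names and weeks mirror the export-feature example above.

components = {
    "Generate CSV from user table": 2,
    "Handle permissions": 1,
    "Add GDPR compliance": 2,
    "Create async job": 1,
    "Build progress UI": 1,
    "Testing and edge cases": 1,
}

total_weeks = sum(components.values())
print(total_weeks)  # 8
```

Writing the breakdown down, even this crudely, also makes it easy to challenge individual pieces instead of arguing about the total.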
4. Always Express Uncertainty
Estimates are predictions. Predictions have uncertainty. Say so.
"This is 5-7 weeks, depending on how stable the third-party API is."
"This is 3 weeks if we reuse existing code, 6 weeks if we rebuild."
Not expressing uncertainty creates false confidence. "5 weeks" sounds precise. "5-7 weeks" is honest.
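One way to carry uncertainty through a breakdown is to estimate each component as a (low, high) range in weeks and sum the bounds. A sketch with made-up component names and numbers:

```python
# Sketch: propagate uncertainty by summing per-component ranges.
# Component names and ranges are illustrative assumptions.

components = {
    "API integration": (2, 3),
    "Data migration": (1, 2),
    "UI changes": (2, 2),
}

low = sum(lo for lo, hi in components.values())
high = sum(hi for lo, hi in components.values())
print(f"{low}-{high} weeks")  # 5-7 weeks
```

Summing every component's worst case is conservative, since not everything goes wrong at once, but a deliberately wide total is more honest than a single optimistic number.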
5. Review Code Before Estimating
Estimates improve dramatically when estimators have visibility into the codebase. Thirty minutes spent reading the relevant code beats guessing.
Before estimating an API change, look at the existing API code. Before estimating a database change, look at the schema and migration process. You'll estimate better because you understand the real constraints.
6. Don't Negotiate Estimates Downward
When an estimate feels high, ask why. If the answer is "the code is complex" or "we have low test coverage," that's real information. Don't negotiate that away. Instead, ask: "Can we reduce scope?" or "Can we refactor the complex code first?"
Negotiating estimates downward creates optimistic bias. The estimate doesn't improve - your confidence just becomes misplaced.
Practicing these habits pays off beyond better numbers. Estimation sounds like a dry technical topic, but in practice it's a lever for team health.
"Estimates should be 100% accurate." No. They should be honest about uncertainty. An estimate of 5-7 weeks with full transparency about dependencies is better than an estimate of 5 weeks with false precision.
"Adding more estimators improves accuracy." No. More estimators with the same information reach similar conclusions. Adding estimators who have more context (they know the relevant code) improves accuracy. Having one estimator deeply review the code beats having five estimators guess.
"We should estimate in time, not story points." Either works, but time is clearer. "5-7 weeks" is more transparent than "8 points." If you use story points, be clear about the relationship to time: "Our velocity is 8 points per week, so 8 points = 1 week."
"Estimation is just the engineer's job." Estimation is a product team conversation. The engineer provides technical estimates. The PM provides context about dependencies and business constraints. The conversation should include both.
Q: Should we estimate in story points or time? A: Time is clearer. "5 weeks" is more transparent than "8 story points." But if your team uses story points effectively, that works too. What matters is that you map story points to time and that everyone uses the same mapping.
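Making the points-to-time mapping explicit can be as simple as one constant and one function. A sketch using the 8-points-per-week velocity mentioned above; the names are assumptions:

```python
# Sketch: an explicit, shared story-point-to-time mapping.
# The velocity value is the illustrative figure from the text.

VELOCITY_POINTS_PER_WEEK = 8

def points_to_weeks(points):
    """Convert story points to calendar weeks at the team's velocity."""
    return points / VELOCITY_POINTS_PER_WEEK

print(points_to_weeks(8))   # 1.0
print(points_to_weeks(20))  # 2.5
```

The value of writing the mapping down is that everyone converts points the same way, which is the condition the answer above sets for using points at all.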
Q: How do we estimate refactoring work? A: The same way as feature work. Break it down. "Refactor payment module: extract common logic (2 weeks), refactor webhook processing (1 week), add tests (1 week). Total: 4 weeks." Refactoring takes time. Own that.
Q: How often should we re-estimate as we learn more? A: Continuously. Estimates are predictions based on available information. As you learn more, update your estimate. If you start a task and halfway through discover it's more complex than expected, update your estimate and communicate the change to stakeholders.
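Re-estimation mid-task can be sketched as extrapolating from actuals: if the first portion of the scope took longer than planned, project that pace onto the rest. The function name and numbers are illustrative assumptions:

```python
# Sketch: re-estimate mid-task by extrapolating actual pace so far.

def reestimate(original_weeks, fraction_done, weeks_spent):
    """Project total effort from progress to date.

    fraction_done: rough share of the scope completed (0..1).
    weeks_spent: actual time spent so far.
    """
    if fraction_done <= 0:
        return original_weeks  # no new information yet
    return weeks_spent / fraction_done

# Planned 6 weeks; halfway through the scope after 4 weeks of work.
print(reestimate(6, 0.5, 4))  # 8.0 -> update stakeholders now
```

The point is not the arithmetic but the habit: when actuals diverge from the plan, recompute and communicate rather than hoping the second half goes faster.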
Q: What if engineers consistently miss their estimates? A: First, check if the issue is estimation or execution. Are estimates optimistic (engineering underestimates)? Or is execution slow (estimates are right, but things take longer)? If estimates are consistently optimistic, invest in estimation discipline. If execution is slow, investigate the blockers.