Project estimation accounts for coordination costs, unknown unknowns, and codebase complexity. Learn methods to forecast project duration and manage uncertainty.
I've shipped hundreds of estimates across three companies. My accuracy improved dramatically when I stopped relying on gut feel and started using historical data from our actual codebase.
Software project estimation is the practice of predicting how long a software project will take to complete. It covers development time, testing time, and overall project duration from start to finish.
Software project estimation is notoriously difficult. Some estimates are 2x reality. Some are 10x reality. This isn't because engineers are bad estimators - it's because software work involves unknowns that are hard to predict: technical surprises, scope creep, dependencies on other teams, and the unpredictability of human productivity.
Accurate estimation enables credible roadmaps. When estimates are honest and realistic, product roadmaps become achievable. When estimates are optimistic, roadmaps become fiction.
Estimation failures cascade. When you promise a project will ship in 8 weeks and it takes 16 weeks, multiple problems follow: you miss commitments, you damage your credibility with leadership, the team feels schedule pressure and cuts corners, technical debt accumulates, and team morale suffers.
Estimation accuracy is one of the highest-leverage things a PM can influence. Better estimation = better planning = better outcomes.
Teams estimate differently:
Story points: Relative sizing (this is bigger than that). "Feature A is 8 points, Feature B is 3 points." Teams then track velocity (how many points they complete per week). Estimation: points. Timeline: infer from velocity.
Time-based: Estimate in hours, days, or weeks. "Feature A will take 4 weeks." Timeline: direct.
Both work. Story points add a layer of abstraction (you estimate relative size, not absolute time). Time-based estimates are more direct. Story points are useful for teams with variable team composition or high uncertainty. Time-based is clearer for external communication ("when will this ship?").
The key: whatever scale you choose, estimate in that scale from the start, and map to time eventually. If you use story points, know what a point equals in time. "Our velocity is 8 points per week" in a 40-hour week means 1 point ≈ 5 hours. That's the mapping.
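As a sketch, the mapping is a one-line conversion; the velocity value here is illustrative, and in practice you would take it from your own sprint history:

```python
# Hypothetical helper: convert a story-point estimate to calendar weeks
# using measured velocity (points completed per week).
def points_to_weeks(points: float, velocity_points_per_week: float) -> float:
    """Map relative size (points) to time via the team's velocity."""
    return points / velocity_points_per_week

# A 20-point feature for a team completing 8 points per week:
print(points_to_weeks(20, 8))  # 2.5 weeks
```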
1. Break down scope. What exactly are we building? Break it into smaller pieces. "Export feature" becomes: "generate CSV file", "add permissions layer", "build progress UI", "write documentation".
2. Estimate each piece. How long will each piece take? Use reference data when possible ("we did something similar last quarter, it was 3 weeks, this is slightly bigger so 4 weeks"). When no reference data exists, be honest about uncertainty ("5-7 weeks, high uncertainty because we haven't done this before").
3. Add up the pieces. Sum all estimates. Don't forget: testing, code review, documentation, deployment.
4. Add buffer. Add 15-30% for unknowns, surprises, and dependencies. "Sum is 4 weeks, adding 20% buffer = 4.8 weeks, round to 5 weeks."
5. Communicate clearly. "This project is estimated at 5 weeks. Our estimate assumes X, Y, and Z. If any of those assumptions change, the estimate changes."
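The five steps above reduce to a few lines of arithmetic. The task names and durations below are illustrative, not a real plan:

```python
# Steps 1-4 in code: break down, estimate each piece, sum, add buffer.
# Task names and durations are made up for the sketch.
tasks = {
    "generate CSV file": 1.5,      # weeks
    "add permissions layer": 1.0,
    "build progress UI": 1.0,
    "write documentation": 0.5,
}

subtotal = sum(tasks.values())     # 4.0 weeks
buffered = subtotal * 1.20         # step 4: 20% buffer for unknowns
quoted = round(buffered)           # communicate a round number

print(f"sum {subtotal} weeks, +20% buffer {buffered:.1f}, quote {quoted} weeks")
```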
"We're experienced, so we estimate accurately." Experience helps, but biases persist. Experienced teams can still be optimistic. The antidote: honest assessment of past estimates vs. actual time. Did we estimate 4 weeks and take 6? That's bias to understand and correct for.
"Estimation is the engineer's job." Partly. Engineers estimate development time. But project estimation requires input from: product (scope clarity), design (design time), QA (testing time), and devops (deployment time). It's a team estimate.
"We should estimate aggressively to motivate the team." Don't. Aggressive estimates that are unrealistic create schedule pressure and reduce quality. Realistic estimates enable good work.
"Everything depends on something else, so estimation is impossible." Dependencies make estimation harder, but not impossible. Quantify: "Feature A depends on framework refactor. That refactor is estimated at 3 weeks. Feature A can't start until then, so total is 3 + 4 = 7 weeks (refactor + feature)."
"We'll figure it out as we go." Fair for exploratory work. But for committed roadmaps, "figure it out as we go" means you won't know when you'll ship. That's fine for internal tools. Not fine for customer-facing features where stakeholders need timelines.
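The dependency arithmetic from the example above (refactor, then feature) generalizes to a minimal critical-path calculation: a task's finish time is its own duration plus the latest finish among its dependencies. Tasks and durations here are hypothetical:

```python
# Minimal critical-path sketch. Each entry: (duration_weeks, depends_on).
# Task names and numbers are illustrative.
tasks = {
    "framework refactor": (3, []),
    "feature A": (4, ["framework refactor"]),  # blocked by the refactor
    "feature B": (2, []),                      # independent, runs in parallel
}

def finish(name: str) -> float:
    """Finish time = own duration + latest dependency finish."""
    weeks, deps = tasks[name]
    return weeks + max((finish(d) for d in deps), default=0)

print(max(finish(t) for t in tasks))  # 7 weeks: the blocked chain dominates
```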
Not all estimates have the same confidence level.
High confidence: We've done this before. We understand the scope. We understand the codebase. "4 weeks, high confidence."
Medium confidence: We've done something similar. We understand most of the scope. There are some unknowns. "4-5 weeks, medium confidence."
Low confidence: This is novel. We're not sure what the scope really is. "4-8 weeks, low confidence. We'll know more after exploration."
Better to communicate confidence than to pretend all estimates are equally reliable.
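One way to keep confidence attached to the number is to carry the range and the label together, rather than quoting a bare point estimate. This dataclass is an illustrative convention, not a standard:

```python
from dataclasses import dataclass

@dataclass
class Estimate:
    """An estimate that always travels with its range and confidence."""
    low_weeks: float
    high_weeks: float
    confidence: str  # "high", "medium", or "low"

    def __str__(self) -> str:
        if self.low_weeks == self.high_weeks:
            return f"{self.low_weeks:g} weeks, {self.confidence} confidence"
        return f"{self.low_weeks:g}-{self.high_weeks:g} weeks, {self.confidence} confidence"

print(Estimate(4, 4, "high"))  # 4 weeks, high confidence
print(Estimate(4, 8, "low"))   # 4-8 weeks, low confidence
```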
1. Track estimates vs. actuals. Did we estimate 4 weeks and take 6? Track that. Over time, you'll see patterns. "We consistently underestimate by 30%". Then you can adjust (multiply estimates by 1.3x).
2. Break work into smaller pieces. Large estimates are less accurate than small estimates. Instead of "mobile app: 16 weeks", estimate: "authentication (3 weeks), user profile (2 weeks), feed (4 weeks), notifications (2 weeks), testing and polish (3 weeks), deployment (2 weeks)." Sum is the same, but smaller pieces are more accurate.
3. Review code before estimating. 30 minutes looking at the relevant code improves estimates significantly. You'll see patterns, dependencies, and complexity that you wouldn't otherwise.
4. Involve the team. Different people estimate differently. You might estimate 2 weeks, another engineer might estimate 4. Discuss the difference. One of you is probably right, or the truth is in the middle.
5. Include non-coding work. Testing: 1 week. Code review: 3 days. Documentation: 2 days. Deployment: 1 day. These add up and are often forgotten.
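The first tip, tracking estimates vs. actuals, reduces to a small calculation: average the actual-to-estimate ratio over past projects and scale new estimates by it. The history below is made-up sample data:

```python
# Sample history of (estimated_weeks, actual_weeks); made-up data.
history = [
    (4, 6),
    (2, 3),
    (5, 6),
    (3, 4),
]

# Average overrun factor: how much longer work actually takes.
overrun = sum(actual / est for est, actual in history) / len(history)
print(f"average overrun: {overrun:.2f}x")  # average overrun: 1.38x

def adjusted(estimate_weeks: float) -> float:
    """Scale a fresh estimate by the historical overrun factor."""
    return estimate_weeks * overrun
```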
Different estimation methods work better in different contexts. Here are the most widely used approaches:
Planning Poker (Scrum Poker). Team members independently estimate each item using Fibonacci-like numbers (1, 2, 3, 5, 8, 13, 21). High and low estimators explain their reasoning. The team converges on a number. Best for: sprint-level estimation with co-located teams.
T-Shirt Sizing (XS, S, M, L, XL). Relative sizing without numbers. Useful for rough-cut estimation when you need to categorize a large backlog quickly. Convert to numbers later if needed. Best for: roadmap planning and backlog grooming.
Three-Point Estimation (PERT). For each task, estimate optimistic (O), most likely (M), and pessimistic (P) durations. Expected = (O + 4M + P) / 6. This accounts for uncertainty mathematically. Best for: high-stakes projects where accuracy matters.
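A quick sketch of the formula, in weeks; PERT also conventionally uses (P - O) / 6 as a rough standard deviation:

```python
# Three-point (PERT) estimate.
def pert_expected(optimistic: float, most_likely: float, pessimistic: float) -> float:
    """Expected duration: (O + 4M + P) / 6, weighting the most likely case."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def pert_stddev(optimistic: float, pessimistic: float) -> float:
    """Conventional PERT spread estimate: (P - O) / 6."""
    return (pessimistic - optimistic) / 6

# Best case 2 weeks, most likely 4, worst 10:
print(pert_expected(2, 4, 10))  # (2 + 16 + 10) / 6 ≈ 4.67 weeks
```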
Reference Class Forecasting. Instead of estimating from scratch, compare the current project to similar past projects. "The last 5 API integrations took 2-4 weeks. This one is similar in scope. Estimate: 3 weeks." Best for: teams with historical data on similar work.
Monte Carlo Simulation. Feed historical data (how long past tasks actually took vs. estimates) into a simulation. Get probability distributions instead of point estimates. "There is an 80% chance we finish by March 15." Best for: portfolio-level planning and deadline confidence.
| Method | Best For | Team Size | Accuracy | Speed |
|---|---|---|---|---|
| Planning Poker | Sprint planning | 3-9 | High | Medium |
| T-Shirt Sizing | Roadmap planning | Any | Low-Medium | Fast |
| Three-Point (PERT) | Critical projects | Any | High | Slow |
| Reference Class | Repeated work types | Any | High | Fast |
| Monte Carlo | Portfolio planning | Large | Very High | Slow (setup) |
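A minimal Monte Carlo sketch of the idea: resample historical overrun ratios (actual divided by estimate) to turn a task plan into a distribution of completion times. The ratios and plan below are made-up sample data:

```python
import random

# Made-up inputs: overrun ratios observed on past tasks, and a plan of
# estimated task durations in weeks.
overrun_ratios = [1.0, 1.1, 1.2, 1.3, 1.5, 1.5, 1.8, 2.0]
plan_weeks = [4, 3, 6, 2, 5]  # sums to 20 estimated weeks

random.seed(0)  # deterministic for the example

def simulate_once() -> float:
    # Each task's actual duration = its estimate times a drawn past ratio.
    return sum(w * random.choice(overrun_ratios) for w in plan_weeks)

outcomes = sorted(simulate_once() for _ in range(10_000))
p80 = outcomes[int(0.8 * len(outcomes))]
print(f"80% chance of finishing within {p80:.1f} weeks")
```

The point of the exercise: the plan says 20 weeks, but the simulated 80th percentile is meaningfully higher, which is the number you can actually commit to.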
Research consistently shows that software projects are underestimated. Here is what the data says:
The Standish Group (CHAOS Report) found that only 29% of software projects are delivered on time and on budget. 52% are "challenged" (late, over budget, or reduced scope). 19% fail outright.
Steve McConnell (author of Software Estimation: Demystifying the Black Art) found that the average software project overruns its estimate by 28%. Projects estimated early in the lifecycle overrun by 100% or more.
The cone of uncertainty shows that early estimates can be off by 4x in either direction. At the start of a project, a task estimated at 2 weeks could actually take anywhere from 0.5 to 8 weeks. This narrows as the project progresses and unknowns are resolved.
Why does this happen? Mostly the planning fallacy: estimators anchor on the best-case path, and novel work hides unknowns that only surface once building starts. That fallacy also shows up in a few common beliefs:
"Better estimation tools solve estimation problems." Tools help (Jira story point tracking, for example), but the problem isn't tools. It's that humans are bad at predicting the future, especially for novel work. Better tools won't fix the bias.
"We should estimate quarterly work now." Estimates get worse the further in the future they are. Estimate the current sprint or month accurately. Don't estimate the full quarter in detail (you'll be wrong). Have rough estimates for the quarter, then refine as work approaches.
"Accurate estimation means we can commit to anything." No. Even accurate estimates have uncertainty. You can commit to "we'll ship this feature in the next 8 weeks with high confidence." You can't commit to "we'll ship this feature in exactly 8 weeks, not 8.2 weeks."
Q: How do we estimate when requirements are unclear?
A: You can't estimate accurately with unclear requirements. First, clarify. "Here's what we know. Here's what we're unsure about. Let's align on what we're actually building." Then estimate. If you estimate before clarifying, your estimate is fiction.
Q: Should we commit to aggressive estimates to show we're fast?
A: No. Aggressive estimates lead to missed commitments. It's better to estimate conservatively and deliver early than to estimate aggressively and slip. You build more credibility by being reliable than by appearing fast.
Q: What if we're consistently underestimating?
A: That's a symptom. Possible causes: you're including too much scope, your codebase is slower than you think, you're not including testing/review/deployment time, or you have too many dependencies on other teams. Investigate which.