Use Case
Surface complexity before planning sessions. Estimate stories with full visibility into what the code actually looks like. Improve sprint commitment accuracy and delivery velocity.
Across three companies — Shiksha Infotech, UshaOm, and Salesken — I've learned that most engineering problems aren't technical. They're visibility problems.
Your team finished sprint planning with a full sprint of committed work. You estimated the features. You've done this process 20 times. You know how to estimate. But halfway through the sprint, you're running into problems that weren't visible during planning. A story touches a module that hasn't been worked on in two years and has architectural constraints nobody remembered. A feature depends on an API that's been deprecated. A "simple" change cascades through five systems because those systems are tightly coupled in ways planning didn't reveal.
The team misses the sprint commitment. By the time the retro happens, the explanation is usually the same: "We didn't know about the complexity in that module" or "That part of the system is messier than we thought." Not bad estimation - bad information. Engineering planning sessions produce commitments based on incomplete codebase visibility. Then the codebase surprises you.
Engineering planning sessions (sprint planning, quarterly planning, roadmap grooming) operate on two types of information. First, what's the work? Second, how long will it take? The estimation is solid - teams using story points, t-shirt sizing, or any consistent estimation method can be reasonably predictable. But estimation is only as good as the information it's based on.
Here's what happens: you pick a story - "add webhook retry logic." Someone asks "how long?" An engineer who has looked at the webhook code says "three points, simple addition." But that engineer only looked at the webhook service. They didn't trace how webhooks are tested, check whether retry logic already exists elsewhere in the codebase, investigate whether adding retries affects the database schema, or notice that the webhook service depends on an internal library that hasn't been updated in three years. The estimate is made without full context.
The story gets estimated at three points. It turns into eight points in execution. Not because the engineer is bad at estimating - but because the estimate was made without full codebase visibility. Multiply that across a sprint and you're suddenly delivering 60% of what you committed. Across a quarter and your roadmap is in chaos.
Another failure mode: you commit to a feature that sounds independent but has hidden dependencies. "Add export to CSV" seems simple until implementation reveals that the export needs to work with features that are in flux, or that the permissions model wasn't designed for export access patterns, or that database performance breaks at scale when exporting large datasets. Again - not bad engineers, bad planning information.
The cost compounds. Bad planning creates rushed work. Rushed work creates bugs. Bugs create incident response. Incident response consumes engineering capacity. The quarter ends with less shipped than planned. Leadership loses confidence in planning. Teams get more conservative in planning because they've been burned. Delivery velocity appears to decline not because teams are less skilled but because planning became disconnected from reality.
For managers and CTOs, engineering planning is a communication bottleneck. You're supposed to commit to deliverables. But your commitments are based on estimates that could be off by 2x in either direction. You can't tell leadership what's actually going to get done because you don't know.
Most teams try to address this through better estimation frameworks. Agile scaled poorly in part because story points obscure the information that matters. A feature might be "five points" for the happy path and "20 points" once you account for error handling; the estimate averages the two, and then you're shocked when the error handling turns out to be complex.
Other teams try "spike" tickets - dedicated investigation time before estimation. "Let's spend a day figuring out how complex this really is, then estimate." This works, but it adds overhead to every planning cycle. And spikes are point-in-time - they capture complexity as of that specific day. A week later you might discover something new. Or the engineer who did the spike gets pulled away to fight fires and the knowledge is gone.
Some teams try to improve planning by breaking stories smaller. "The issue is that we're underestimating large features. Let's break everything into smaller pieces and estimate those." This helps with visibility but it also increases planning overhead - more stories, more estimation, more discussion. And small stories that are part of a large feature still have dependency surprises.
Some teams accept planning uncertainty as inevitable and buffer for it: they inflate story-point estimates and commit to less work per sprint, building in a safety margin. This is honest but inefficient - you're reserving capacity for surprises that specific codebase knowledge could prevent.
The deeper problem: planning sessions don't have access to systematic codebase information. Estimates are made by engineers with partial information and hope. You're not intentionally ignoring complexity - you're just not aware of it because it's not visible in the story description.
Glue surfaces codebase complexity BEFORE planning sessions happen. Instead of estimating blind, engineers estimate with full visibility into what the code actually looks like. The workflow is different from traditional planning - it's planning with context.
A typical preparation workflow starts before the sprint planning meeting. The engineering lead takes the sprint candidates - the top 10-15 stories - and, instead of going straight to estimation, runs them through Glue to surface complexity.
For a story like "add webhook retry logic," the engineering lead asks Glue: "What does our webhook implementation look like?" Glue shows the webhook service, what exists today, how retries might fit in, and what modules touch webhook code. The lead asks: "What would change if we added retry logic?" Glue shows affected tests, database considerations, and related retry patterns elsewhere in the codebase. The lead asks: "How much test coverage does the webhook module have?" Glue might show: "78% coverage overall, but retry paths are uncovered. You'd need to write new tests to ensure retry behavior is correct."
Now when the team sits down to estimate, there's context. An engineer says "I estimated three points, but Glue shows we don't have test coverage for retries and the database approach isn't clear." The team recalibrates - maybe it's five points, maybe it's eight. But the estimate is based on actual information, not assumptions.
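To make the gap concrete: the "simple addition" the engineer pictured might be a retry wrapper like the sketch below (a minimal, hypothetical Python version; `deliver_with_retry`, `send`, and the parameters are illustrative names, not Glue's or any real service's API). Everything the sketch omits - backoff jitter, idempotency keys, dead-letter handling, and the tests around all of it - is exactly the complexity that moves the estimate from three points to five or eight.

```python
import time

def deliver_with_retry(send, payload, max_attempts=3, base_delay=0.5):
    """Attempt a webhook delivery, retrying transient failures.

    `send` is any callable that raises on failure. Real retry logic
    also needs idempotency keys, jitter, and a dead-letter queue --
    the parts that rarely show up in the initial estimate.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return send(payload)
        except Exception:
            if attempt == max_attempts:
                raise  # retries exhausted: surface the failure to the caller
            # exponential backoff between attempts: 0.5s, 1s, 2s, ...
            time.sleep(base_delay * 2 ** (attempt - 1))
```

Even this toy version raises questions a planning session should hear about: is delivery idempotent if a retry duplicates a send? Where do permanently failing payloads go? Those answers live in the codebase, not in the story description.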
For a feature like "export to CSV," the workflow is similar. Before planning, ask Glue: "What does our export and permissions infrastructure look like?" Glue maps what permission checks exist, what export methods are already in the codebase, and what would need to change. You ask: "At our largest dataset size, would the current export pattern work?" Glue might show query performance implications. You ask: "What other parts of the system interact with the data we're exporting?" Glue maps dependencies you hadn't considered.
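The scale concern is the kind of thing a sketch makes visible. A naive export materializes every row in memory before writing; a streaming version writes row by row, which is often the change the "simple" story actually requires once the largest dataset is considered. Below is a hypothetical minimal sketch in Python (names are illustrative, not any real product's API):

```python
import csv
import io

def export_csv(rows, fieldnames):
    """Stream rows to CSV one chunk at a time instead of materializing
    the full dataset -- the difference between an export that works in
    the demo and one that survives the largest tenant's data."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    for row in rows:  # `rows` can be a generator backed by a paged query
        writer.writerow({k: row.get(k, "") for k in fieldnames})
        yield buf.getvalue()  # emit header + row, then reset the buffer
        buf.seek(0)
        buf.truncate(0)
```

The design choice worth surfacing in planning: streaming forces the underlying query to be paginated too, which is where the database performance questions from the story above come in.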
Specific pre-planning queries look like: "Which modules are in the story and what's their test coverage?" "What dependencies exist between the modules touched by this story?" "Has this module been worked on recently or is it stale code?" "How many other parts of the codebase would be affected by the changes in this story?" "Are there deprecated systems this story depends on?" "What's the code complexity in the modules this story touches?"
The engineering lead brings Glue context to planning meetings. For high-uncertainty stories, the team has better information. For straightforward stories, Glue confirms that the estimate is reasonable. Estimation errors are smaller because estimates are made with better information.
A real example: a story "improve search performance" gets estimated at 5 points. Glue shows the search code is complex (cyclomatic complexity above 50), has below-average test coverage (62%), touches three different services, and performance was last optimized two years ago. The team revises to 8-13 points depending on how deep the optimization needs to go. That's very different from the initial estimate.
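Cyclomatic complexity above 50 is exactly the kind of signal a quick static check can surface before planning. As a rough illustration (not Glue's method - a deliberately simplified approximation using Python's standard `ast` module), complexity can be estimated by counting decision points:

```python
import ast

def rough_complexity(source):
    """Approximate cyclomatic complexity for Python source: 1 plus
    the number of decision points (branches, loops, boolean operators,
    exception handlers). A simplification of the real metric, but
    enough to flag hotspots worth discussing in planning."""
    decision_nodes = (ast.If, ast.For, ast.While, ast.BoolOp,
                      ast.ExceptHandler, ast.IfExp)
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, decision_nodes)
                   for node in ast.walk(tree))
```

A function scoring 50+ on a metric like this is one that almost never matches its "simple change" estimate.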
Another example: "add a new user role" seems complex. Glue shows the permissions system is well-structured, new roles have been added before, patterns are clear, and tests cover role changes. The team estimates 3 points instead of the 8 they were afraid of. They end up shipping faster than expected.
The result is better planning. Not perfect - surprises still happen. But surprises are smaller and less frequent because planning had better information. Sprint commitment rates improve. Teams ship closer to what they committed because the commitment was based on reality.
Sprint planning becomes shorter because engineers have pre-reviewed complexity via Glue. What used to be a two-hour planning meeting full of "hmm, I'm not sure how complex that is" becomes a one-hour meeting with better estimates. The team commits to what it can actually deliver.
Quarterly planning becomes more predictable. Instead of committing to four major features and delivering 2.5, teams commit to features they understand and deliver close to 4. Leadership gets predictability. Teams get credibility for their estimates.
Velocity metrics become more meaningful. If your team's velocity is 40 points per sprint, that means something because estimates are based on visible complexity. You can predict next quarter with reasonable confidence. You can tell leadership "we can deliver six features" and have that be accurate.
Engineering teams spend less time on surprises and more time on shipping. A story doesn't hit a surprise two days in because the surprise was surfaced during planning. Work flows smoothly from planning to completion.
Q: Does this replace engineering judgment? A: No. Glue surfaces codebase complexity. Human judgment decides how much effort that complexity represents and how to address it. Both inputs matter.
Q: What if the team estimates conservatively anyway? A: Teams will always have uncertainty and buffer for it. But if the buffer is based on known complexity rather than unknown complexity, it's smaller and more justified. Teams can be more aggressive in estimates when they have visibility.
Q: How do you handle stories that depend on each other? A: Glue shows dependencies between modules. For stories that create dependencies (like API changes), Glue can help you understand the downstream impact and sequence the work properly.
Q: What if a story still goes sideways during execution? A: It will, sometimes. You can't prevent all surprises. But Glue prevents the most common source of estimation errors: invisible complexity in the codebase. That alone makes a big difference in planning accuracy.