Use Case
Transform technical debt from a vague concern into a managed resource. Glue surfaces which debt is actually slowing your team down and what it would cost to fix.
At Salesken, we had a 'tech debt' label in Jira with 200+ tickets. When our board asked how much technical debt we had, I couldn't give them a number. That experience taught me that unmeasured debt is invisible debt.
Every engineering team knows they have technical debt. Almost no engineering team can tell leadership where the debt is, what it's costing in delivery speed, or when it needs to be addressed before it becomes a crisis. Debt lives in engineers' heads as accumulated frustration - "that module is a mess," "we should really refactor that," "this code is making us slow." But it doesn't live in dashboards that drive business decisions. Without visibility, debt compounds in silence until it's too late.
Technical debt is insidious because it doesn't look like a problem until it is one. A poorly structured authentication module works fine for a year. Then it doesn't. A data model that was fine for 100K users breaks at 10M. Code that was acceptable when written becomes unmaintainable when it's touched by the fifth engineer who didn't understand the original design. The debt doesn't announce itself - it just slows teams down until they're stuck.
Engineering managers and CTOs face a constant challenge: debt is real, but it's invisible to leadership. You know the authentication system is fragile. You know the payments module has been retrofitted so many times that adding features now requires tracing through three layers of legacy patterns. You know the database queries are inefficient and adding new features will require optimization work. But when you tell leadership "we need to address technical debt," you're asking for time and resources based on a feeling, not data.
The consequences are severe. Teams spend weeks on work that should take days because the codebase is fighting them. New engineers take three months to become productive instead of six weeks because they're wrestling with poorly structured systems. Features that should ship in a sprint ship in three because the underlying code is hard to work with. Production incidents happen in code that hasn't been touched in two years because it was never refactored. Delivery velocity declines not because teams are working harder, but because they're fighting the codebase.
Most teams respond by scheduling "tech debt sprints." They block a week or two to refactor. Sometimes it helps. Often it doesn't move the needle because the chosen debt wasn't actually the bottleneck. Sometimes teams refactor the wrong thing and still can't ship faster. The sprint looks good in process terms - "we took time to improve code quality" - but if it doesn't accelerate shipping the features customers care about, it doesn't matter.
Some teams use static analysis tools - SonarQube, Veracode, code complexity metrics. These tools measure code properties: cyclomatic complexity, test coverage, code duplication. All useful data. But code metrics don't answer the business question: "which debt is actually slowing us down?" A module might have high complexity and high test coverage and not be a bottleneck. Another module might have lower complexity but be rewritten constantly because it's at the boundary between systems and nobody understands it. The metrics don't tell you which debt actually matters.
Other teams track tech debt in Jira - tickets for refactoring work, architectural improvements, infrastructure upgrades. This creates a list, but not a priority. The list grows faster than it shrinks. Jira tech debt tickets compete with feature work and usually lose. Without visibility into the impact of each debt item, it's hard to justify prioritizing it over new features that customers are asking for.
Some teams try to measure debt by looking at incident rates. Code that causes frequent production incidents is probably fragile. But this only works for the most obvious debt - code that's so bad it breaks constantly. Plenty of debt never causes incidents; it just slows development. A module might be poorly structured but never fail because it's not in a critical path. It's still slowing the team down, just in ways that don't appear as incidents.
The deeper problem is that technical debt isn't a single thing. It's a symptom. The real question is: "What's slowing us down and why?" The answer might be code complexity, but it might also be unclear architecture, missing test coverage, infrastructure that hasn't kept pace with growth, or patterns that worked for version 1 but not version 5. Without visibility into the actual bottlenecks in your codebase, you're choosing debt to refactor blindly.
Glue connects code metrics to development impact. Instead of just showing complexity and coverage, Glue helps engineering managers ask: "Which modules have high complexity and low test coverage?" (the highest-risk combination), "Which parts of the codebase are most frequently involved in production incidents?" (debt that's actively breaking things), "Which modules do engineers complain about most?" (captured through git history and code review patterns), and "What would change in delivery velocity if we refactored [this module]?"
A typical workflow starts with a CTO or engineering manager asking Glue to surface the top technical debt hotspots. Glue might return: "Your authentication module has high complexity (cyclomatic complexity > 50), below-average test coverage (62%), and has been involved in 8 production incidents over the past six months. It's also been modified in 45 commits in the last quarter - it's a churn center." That's not just a code metric. That's evidence of debt that matters.
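A hotspot report like the one above combines several signals into a single ranking. The sketch below is illustrative only, not Glue's actual scoring model: the `Module` fields mirror the signals named in the example (complexity, coverage, incidents, churn), but the weights are made-up assumptions.

```python
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    cyclomatic_complexity: int   # worst-case complexity in the module
    test_coverage: float         # 0.0 - 1.0
    incidents_6mo: int           # production incidents linked to this module
    commits_last_quarter: int    # churn: how often the module changes

def hotspot_score(m: Module) -> float:
    """Combine complexity, coverage gaps, incident history, and churn
    into one debt signal. Weights are illustrative, not calibrated."""
    coverage_gap = 1.0 - m.test_coverage
    return (m.cyclomatic_complexity * coverage_gap   # complex AND untested
            + 5.0 * m.incidents_6mo                  # actively breaking
            + 0.5 * m.commits_last_quarter)          # frequently touched

# Numbers taken from the authentication-module example above;
# "billing" is a hypothetical low-debt comparison point.
auth = Module("auth", cyclomatic_complexity=52, test_coverage=0.62,
              incidents_6mo=8, commits_last_quarter=45)
billing = Module("billing", cyclomatic_complexity=30, test_coverage=0.85,
                 incidents_6mo=1, commits_last_quarter=10)

ranked = sorted([auth, billing], key=hotspot_score, reverse=True)
```

The point of a composite score is not precision but ordering: it surfaces modules that are complex, undertested, incident-prone, and frequently touched at the same time, which is the combination the example calls out.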
Glue could surface a second hotspot: "Your user database schema has been extended 12 times in the past two years through migration scripts. It now has 47 columns, many with unclear purpose. New feature development touching this schema takes 40% longer than equivalent features in other data stores. Three of the last five bugs were related to schema assumptions." Again - this is debt with measurable impact.
A third hotspot might be: "Your notification system has three different implementations (email, SMS, and push notifications) using three different queuing patterns. When you add a notification channel, you're adding code in three places. The inconsistency has caused three bugs where one channel failed silently while others worked. Unifying this system would eliminate a class of future bugs and reduce code required for new channels by 60%."
For each hotspot, an engineering manager can now ask: "What would it take to refactor this?" Glue shows the scope - lines of code affected, number of modules that depend on it, existing test coverage that could make refactoring safer. For the authentication module, Glue might say: "This would require changes to 18 modules that depend on it. Test coverage exists but is incomplete, so parts of the refactoring would be unprotected. A safe refactoring would take approximately 3-4 weeks of engineering time."
The manager can then calculate impact. The authentication module is a bottleneck for 40% of new feature work. Refactoring would take 3 weeks and free up delivery capacity; over a quarter, that freed capacity translates to an estimated one to two extra features. Is that worth 3 weeks of time? Usually yes. For the database schema, the math is similar - if every feature touching that schema takes 40% longer, and you work on schema-touching features regularly, unifying the schema has high ROI.
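The capacity math above can be made explicit. The 3-week refactoring cost and the 40% share of affected feature work come straight from the example; the team size and the 1.4x slowdown on affected work are hypothetical assumptions for this sketch.

```python
# Back-of-the-envelope ROI for the authentication refactor.
# Assumptions (hypothetical): a 6-person team, a 12-week quarter, and
# features touching the module currently taking 40% longer (1.4x).

refactor_cost_weeks = 3           # from the example: 3-week refactor
quarter_weeks = 12
team_size = 6
share_of_work_affected = 0.40     # 40% of feature work hits this module
slowdown_factor = 1.4             # affected work takes 40% longer today

team_weeks_per_quarter = quarter_weeks * team_size                # 72
affected = team_weeks_per_quarter * share_of_work_affected        # 28.8
after_refactor = affected / slowdown_factor                       # ~20.6
saved_per_quarter = affected - after_refactor                     # ~8.2

# Quarters until the refactor pays for itself in recovered capacity:
payback_quarters = refactor_cost_weeks / saved_per_quarter        # ~0.36
```

Under these assumptions the refactor pays for itself well within a quarter; the useful exercise is plugging in your own team's numbers, not trusting these.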
Specific Glue queries look like: "Which modules have been most recently refactored and which haven't been touched in more than two years?" (neglected modules are often debt-heavy), "What's the estimated effort to add test coverage to [module]?", "If we standardized [pattern] across the codebase, how many files would change?", "Which modules are dependencies for the most other modules?" (core modules that are fragile are higher risk), and "What are the most common code smells in [system]?"
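The "untouched in more than two years" query above can be approximated directly from git history. A minimal sketch, assuming a local checkout; the repository and module paths are placeholders, and this is not Glue's implementation.

```python
import subprocess
from datetime import datetime, timedelta, timezone

def last_commit_date(repo: str, path: str) -> datetime:
    """Author date of the most recent commit touching `path` (ISO 8601)."""
    out = subprocess.check_output(
        ["git", "-C", repo, "log", "-1", "--format=%aI", "--", path],
        text=True,
    ).strip()
    return datetime.fromisoformat(out)

def is_stale(last_commit: datetime, now: datetime, years: float = 2.0) -> bool:
    """True if the last change is older than the staleness cutoff."""
    return (now - last_commit) > timedelta(days=365 * years)

def stale_modules(repo: str, paths: list[str], years: float = 2.0) -> list[str]:
    """Filter `paths` down to those untouched for more than `years` years."""
    now = datetime.now(timezone.utc)
    return [p for p in paths
            if is_stale(last_commit_date(repo, p), now, years)]

# Usage (hypothetical repo and module paths):
# stale_modules("/srv/app", ["src/auth", "src/billing", "src/notifications"])
```

Staleness alone isn't debt - a stable module may simply be finished - but crossing it against complexity and dependency count separates "done" from "avoided".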
Technical debt becomes a managed resource, not a background problem. Instead of vague "we need to address debt" conversations, leadership and engineering discuss specific debt items with estimated effort and measurable impact. Some debt gets prioritized and fixed. Other debt gets accepted as a known limitation - "we know this module is complex, but it's not on our path, so we're willing to live with it." Other debt gets prevented - engineers understand which patterns create debt and avoid them.
Delivery velocity stabilizes or improves. The same team ships more features in the same time because they're not fighting the codebase as much. Onboarding new engineers gets faster because codebases with less debt are easier to understand. Production incidents decrease in systems that were refactored because the fragile code is now structured properly. The team ships faster not because they're working harder, but because the system is fighting them less.
Q: Does Glue recommend which debt to fix? A: Glue surfaces which debt exists and what's slowing you down. The decision about what to fix is a human one, based on your business priorities and engineering capacity. But Glue gives you the data to make that decision.
Q: What if we can't afford to fix any debt? A: Understanding debt still matters. Glue shows you the trade-offs. You might decide not to fix the authentication module, but you'll know that certain changes are riskier or slower because of that debt. You can route work around it or accept slower delivery on debt-heavy modules.
Q: Can Glue tell me if a refactoring will actually help? A: Glue can show you which modules are bottlenecks and estimate the scope of refactoring work. Whether a refactoring actually accelerates delivery depends on whether that module was actually the bottleneck. Glue helps you choose the right modules to refactor.
Q: How do you measure debt impact on delivery velocity? A: Glue shows multiple signals: test coverage, modification frequency, incident involvement, dependency count. These together paint a picture of which modules are friction points. Delivery velocity measurements over time show whether refactoring actually improved things.