Introduction: The Dark Matter Problem
At Salesken, we had a "tech debt" label in Jira with 200+ tickets. When our board asked how much technical debt we had, I couldn't give them a number. Not a dollar figure, not a severity score, not even a rough estimate of engineering hours owed. I knew it was bad — our deploy times were getting longer, our incident rate was climbing — but I couldn't quantify it. That's when I started building a measurement framework.
Technical debt is everywhere. Every engineering team knows it exists. The legacy authentication system that's been patched a dozen times. The monolith that takes 15 minutes to spin up locally. The test suite that fails intermittently. The three different logging systems running in parallel.
Yet most teams can't answer this question: How much technical debt do we actually have?
Unlike financial debt, which you can measure in dollars with precision, technical debt exists in a shadow realm. It's felt in slow deployments, in developer frustration, in production incidents at 2 AM. But without measurement, it's impossible to prioritize, impossible to communicate its true cost to leadership, and impossible to know if your debt-reduction efforts are working.
This is the core problem technical debt metrics solve. They transform vague engineering concerns into quantifiable data that engineering leaders can track, trend, and communicate to non-technical stakeholders.
In this guide, we'll explore the five essential categories of technical debt metrics, build a practical framework you can present to leadership, and show you how modern AI-driven tools are automating debt detection and prioritization at scale.
The 5 Categories of Technical Debt Metrics
Technical debt isn't one-dimensional. It manifests across multiple layers of your codebase and systems. To measure it comprehensively, you need metrics across five distinct categories.
1. Code-Level Metrics: The Foundation
Code-level metrics give you a direct view into the quality and maintainability of your codebase. These are the metrics that automated tools can measure today.
Cyclomatic Complexity
Cyclomatic complexity measures the number of independent paths through your code. High complexity (typically >10 per function) indicates code that's hard to understand, test, and maintain. A function with 20 different decision branches is a red flag: it's likely doing too much and represents technical debt accumulated from feature additions over time.
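As a rough illustration of what the number counts (not how SonarQube computes it), complexity can be approximated with Python's built-in ast module by counting branch points per function:

```python
import ast

# Node types that add an independent path. A simplified rule set:
# real analyzers also weigh boolean operators per-operand, match
# statements, comprehension conditions, and more.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> dict:
    """Approximate per-function cyclomatic complexity: 1 + branch count."""
    scores = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            branches = sum(isinstance(child, BRANCH_NODES)
                           for child in ast.walk(node))
            scores[node.name] = 1 + branches
    return scores

def flag_complex_functions(source: str, threshold: int = 10) -> list:
    """Return names of functions exceeding the complexity threshold."""
    return [name for name, score in cyclomatic_complexity(source).items()
            if score > threshold]
```

In practice you would run a dedicated analyzer (SonarQube, radon, lizard) rather than roll your own, but the sketch makes the metric concrete: each branch is another path you have to understand and test.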
Code Duplication
Duplicated code is a form of technical debt because every change to a duplicated pattern requires edits in multiple places. Tools like SonarQube flag duplicate code blocks, typically identifying debt when duplication exceeds 3-5% of your codebase. For a 500,000-line codebase, that's 15,000-25,000 lines of redundant code consuming maintenance effort.
Code Smells
Code smells aren't bugs; they're patterns that suggest underlying design problems. Long methods, God classes, unused parameters, and deeply nested conditionals are all code smells. SonarQube detects hundreds of these patterns. Tracking the number of active code smells in your system gives you a quantifiable debt measurement.
SonarQube Score & Security Rating
SonarQube's overall rating (A through E) provides a composite view of code quality. More importantly, it segments issues by severity (critical, major, minor), allowing you to focus on debt that actually impacts stability and security. The Security rating separately tracks vulnerability debt.
Test Coverage
Low test coverage (below 60%) indicates code whose behavior isn't validated, making changes risky. That's debt: every refactoring becomes a gamble. Conversely, coverage above 80% for critical paths is a sign of health.
Key Takeaway: These metrics are easiest to measure because tools automate them. But they only tell half the story. A codebase can have good code quality metrics while still being slow to deploy.
2. Delivery Impact Metrics: How Tech Debt Slows Everything Down
Code quality metrics are important, but they only matter if they correlate with delivery speed. Delivery impact metrics measure the tangible effect of technical debt on your team's ability to ship features.
Cycle Time
Cycle time measures how long it takes a feature to move from "in progress" to deployed. A high-debt codebase will have longer cycle times because:
- Setup and build times are slower
- Testing requires manual verification (low automation due to fragile tests)
- Deployments are riskier (unreliable tests provide no safety net)
- Small changes require extensive regression testing
Track cycle time by component or team. If your "legacy auth" component has a 14-day average cycle time while your newer services average 3 days, that's quantified technical debt.
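A minimal sketch of that per-component breakdown, assuming you can export tickets with start and deploy timestamps (the field names and ticket data here are hypothetical):

```python
from datetime import date
from collections import defaultdict

def avg_cycle_time_days(tickets):
    """Average days from 'in progress' to deployed, grouped by component.

    Each ticket is a dict with hypothetical keys:
    'component', 'started' (date), 'deployed' (date).
    """
    totals = defaultdict(lambda: [0, 0])  # component -> [total_days, count]
    for t in tickets:
        days = (t["deployed"] - t["started"]).days
        totals[t["component"]][0] += days
        totals[t["component"]][1] += 1
    return {c: total / n for c, (total, n) in totals.items()}

tickets = [
    {"component": "legacy-auth", "started": date(2026, 1, 5),  "deployed": date(2026, 1, 19)},
    {"component": "legacy-auth", "started": date(2026, 1, 10), "deployed": date(2026, 1, 24)},
    {"component": "new-service", "started": date(2026, 1, 12), "deployed": date(2026, 1, 15)},
]
# legacy-auth averages 14 days, new-service 3: the gap is quantified debt
print(avg_cycle_time_days(tickets))
```

Most project trackers can export these timestamps directly; the point is to slice the average by component rather than look at one team-wide number.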
Deployment Frequency
High-debt systems deploy less frequently because deployments are risky. A team shipping 10 times per week typically carries less debt than one shipping twice per month. Deployment frequency is both a cause and an effect: infrequently deployed code accumulates more technical debt because it's harder to keep up with security patches and dependency updates.
Lead Time for Changes
Lead time is the time from code commit to production. High debt increases it because the path to production is littered with manual gates, approvals, and testing bottlenecks.
Bug Escape Rate
Technical debt correlates with production bugs. Track how many bugs make it to production per release. High-debt systems have high escape rates because test coverage is poor and manual testing is inconsistent. When you pay down debt (improve tests, reduce complexity), this number should decline.
Mean Time to Recovery (MTTR)
When incidents happen in high-debt systems, they take longer to resolve because the code is hard to understand. MTTR is another proxy for debt: if it takes your team 4 hours to find and fix a bug in the payment system, but 20 minutes in a well-architected service, that's quantified debt.
Key Takeaway: Delivery metrics are the bridge between code quality and business impact. They show that technical debt isn't just a code quality problem—it's a velocity problem.
3. Maintenance Burden Metrics: The Tax on Your Sprint Capacity
Every sprint has a finite amount of capacity. Some of that capacity goes to building new features. The rest goes to maintenance, incident response, and paying down debt. Tracking this allocation reveals the true cost of technical debt.
Percentage of Sprint Capacity Spent on Maintenance
A healthy ratio is roughly 20% maintenance, 80% features. High-debt teams might see 50% or more of their sprint consumed by bugs, incident response, and refactoring just to keep the lights on. This is a leadership-visible metric: when the CFO hears that 40% of engineering capacity is absorbed by maintaining legacy code, the business case for debt paydown becomes obvious.
Calculate this by tracking story points or hours: (maintenance work hours / total sprint hours) × 100.
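The formula above, as a one-liner you could run against sprint exports (the hour figures are illustrative):

```python
def maintenance_pct(maintenance_hours: float, total_hours: float) -> float:
    """Share of sprint capacity consumed by maintenance, as a percentage."""
    return maintenance_hours / total_hours * 100

# e.g. 168 of 400 sprint hours went to bugs, incidents, and upkeep
print(round(maintenance_pct(168, 400), 1))  # 42.0
```

The same calculation works with story points instead of hours; just be consistent about which categories of work count as "maintenance" from sprint to sprint.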
Incident Frequency by System
Some parts of your codebase are incident machines. The payment processing system with 5 incidents per quarter vs. the reporting service with none: that's a clear debt signal. High-debt components have higher incident frequency because they're fragile, poorly understood, and lack safeguards.
Mean Time Between Failures (MTBF)
MTBF is the inverse of incident frequency. If a critical system has an MTBF of 5 days, that's a red flag that technical debt has accumulated to the point of affecting reliability.
Knowledge Concentration
When only one person understands how a system works, that's a specific kind of technical debt: the "bus factor," or knowledge debt. Track the number of team members who can troubleshoot and modify each critical system. Low numbers indicate debt that will materialize as an emergency when that person leaves.
Key Takeaway: Maintenance burden metrics are deeply tied to team capacity and morale. High burden leads to burnout. This metric resonates with team leads and managers because it directly affects their ability to hit roadmap commitments.
4. Business Impact Metrics: Translating Tech Debt into Dollars
Engineering leaders speak the language of metrics. Finance and product speak the language of dollars and customer impact. Business impact metrics bridge that gap.
Engineering Time Cost
Calculate the annual cost of technical debt by estimating how much engineering capacity it consumes. If 30% of your $5M annual engineering budget is spent on maintenance and debt paydown instead of features, your technical debt is costing $1.5M annually in opportunity cost.
This is powerful because it makes debt concrete: no longer an abstraction, but a line item in your P&L.
Opportunity Cost
Beyond direct maintenance hours, quantify what your team could be building instead. If your team spent 1,000 hours this quarter paying down debt instead of building features, and each new feature takes 200 hours of development, that's 5 features you didn't ship.
Value those features: if the average feature drives $50K in ARR, then technical debt cost you $250K in lost revenue this quarter.
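The back-of-envelope math from this section, made explicit (all dollar figures are the article's illustrative numbers, not benchmarks):

```python
def debt_opportunity_cost(debt_hours: float, hours_per_feature: float,
                          revenue_per_feature: float) -> tuple:
    """Features not shipped, and revenue forgone, due to debt work."""
    features_lost = debt_hours / hours_per_feature
    return features_lost, features_lost * revenue_per_feature

features, lost_arr = debt_opportunity_cost(
    debt_hours=1_000,           # hours spent on debt this quarter
    hours_per_feature=200,      # average feature cost in dev hours
    revenue_per_feature=50_000, # average ARR a new feature drives
)
print(features, lost_arr)  # 5.0 features, $250,000 in forgone ARR
```

Plug in your own averages; the precision matters less than having a defensible number to put in front of finance.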
Customer Churn from Reliability
High-debt systems produce reliability issues: customers experience slow APIs, downtime, and lost data. Some percentage of your churn is directly attributable to reliability. Track this by analyzing exit surveys and churn data; when customers say "your system is too unreliable" in exit interviews, tie that lost revenue back to your technical debt problem.
Hiring & Retention Impact
Engineers don't want to work in high-debt codebases: they're frustrating, career-limiting, and burn people out. Track whether your tech debt problem is affecting recruitment and retention. If your core platform team has a 25% annual attrition rate while the rest of the organization sits at 5%, that gap is a debt-driven cost in recruitment, onboarding, and lost productivity.
Key Takeaway: Business impact metrics are the hook that gets funding for debt paydown. They translate engineering concerns into language that executives understand and act on.
5. Trend Metrics: Is Your Debt Growing or Shrinking?
A snapshot of your debt today is less valuable than understanding the direction. Trend metrics show whether you're getting healthier or sicker.
Debt Ratio Over Time
Plot your overall debt ratio (code smells + duplications + coverage gaps, normalized) month over month. Are you trending down or up? A flat line means you're accumulating debt at the same rate you pay it down: you're maintaining the status quo, not improving.
Velocity of Debt Accumulation
Compare the rate at which new technical debt is introduced (new code smells, new duplications) with the rate at which you eliminate it. If you're adding 50 new code smells per month but eliminating 30, your net debt is growing by 20 per month. That's unsustainable.
Component Health Trends
Some components improve over time (their debt ratio drops) while others degrade. This reveals which teams are prioritizing quality and which are purely in feature-shipping mode.
Dependency Age & Outdatedness
How far your critical dependencies lag behind their latest versions is another form of technical debt. If your Node runtime is 2 major versions behind, that's debt: security fixes and performance improvements are inaccessible to you. Because new versions keep being released, this metric drifts upward on its own; only consistent upgrades keep it low.
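A small sketch of the dependency-lag metric, comparing installed versus latest major versions (the version data here is invented for illustration; in practice you would pull it from your package manager, registry API, or a tool like Dependabot):

```python
def major_lag(installed: str, latest: str) -> int:
    """Major versions between installed and latest (semver-style strings)."""
    return int(latest.split(".")[0]) - int(installed.split(".")[0])

# Hypothetical inventory: name -> (installed version, latest version)
deps = {
    "node":   ("20.11.0", "22.3.0"),
    "django": ("4.2.1",  "5.0.6"),
}
lag = {name: major_lag(*versions) for name, versions in deps.items()}
print(lag)  # {'node': 2, 'django': 1}
```

Summing or averaging the lag across your critical dependencies gives a single number you can trend quarter over quarter.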
Key Takeaway: Trends are where accountability happens. You can't manage what you don't measure over time. Trends allow teams to set targets ("reduce code complexity by 15% this quarter") and track progress.
Building a Tech Debt Scorecard: A Framework for Leadership
Raw metrics are noise without context. A tech debt scorecard synthesizes these metrics into a single, understandable view that engineering teams can present to stakeholders quarterly.
Here's a practical framework:
Q1 2026 - Technical Debt Scorecard
==================================
Overall Debt Health: 6/10 (declining, was 7/10 in Q4)
CODE QUALITY (Weight: 25%)
- Cyclomatic Complexity: High Risk (avg 12.5, target <10)
- Code Duplication: 7.2% (acceptable <5%, trend: rising)
- Test Coverage: 72% (acceptable, target >80%)
- SonarQube Rating: B (target: A)
Score: 6/10
DELIVERY SPEED (Weight: 35%)
- Average Cycle Time: 8 days (target <5 days)
- Deployment Frequency: 8x/week (target >10x/week)
- Bug Escape Rate: 2.3% (target <1%)
- MTTR for Critical Issues: 2.8 hours (target <1 hour)
Score: 5/10
STABILITY & RELIABILITY (Weight: 25%)
- Incident Frequency (monthly): 4.2 incidents (target <2)
- MTBF for Platform: 6.8 days (target >30 days)
- Unplanned Downtime: 2.3 hours/month (target <30 min)
Score: 4/10
TEAM IMPACT (Weight: 15%)
- Sprint Capacity on Maintenance: 42% (target <25%)
- Knowledge Concentration (auth system): 1.2 people (target >2)
- Team Satisfaction (debt question): 4.2/10 (declining)
Score: 5/10
WEIGHTED OVERALL SCORE: 5.0/10
Priority Areas:
1. Reduce cycle time (biggest delivery drag)
2. Improve test coverage & reliability
3. Knowledge transfer on auth system
4. Reduce maintenance burden
Target Score Q2 2026: 6.2/10
This scorecard:
- Makes debt visible: No hand-waving, just numbers
- Combines multiple perspectives: Code quality matters, but delivery speed matters more
- Weights what matters: You can adjust the weights based on your priorities
- Enables accountability: Teams see their score and month-to-month progress
- Drives prioritization: The priority areas naturally emerge from the gaps
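The weighted overall score is just a weighted average of the four category scores; a sketch using the example weights from the scorecard above:

```python
def weighted_debt_score(scores: dict, weights: dict) -> float:
    """Weighted average of category scores (weights should sum to 1.0)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[cat] * weights[cat] for cat in scores)

# Category scores and weights from the example scorecard
scores  = {"code_quality": 6, "delivery": 5, "stability": 4, "team": 5}
weights = {"code_quality": 0.25, "delivery": 0.35, "stability": 0.25, "team": 0.15}

print(round(weighted_debt_score(scores, weights), 2))  # 5.0
```

Adjusting the weights dict is how you encode your priorities: a reliability-focused quarter might shift weight from delivery to stability.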
Communicating Tech Debt to Non-Technical Stakeholders
Engineering teams understand why code complexity matters. CFOs and product leaders don't. Here's how to translate technical debt metrics into language that resonates with business stakeholders.
The "Interest Rate" Analogy
Technical debt, like financial debt, has an interest rate. Every sprint, you "pay interest" on your debt through:
- Slower feature velocity
- More incidents and firefighting
- Longer onboarding for new engineers
- Higher stress and burnout on your team
You can quantify this: "Our codebase is costing us $1.5M in annual lost engineering capacity (the interest rate). We could either pay this interest forever, or we could invest $300K now to pay down the principal and reduce that annual cost to $400K within two years."
This framing makes the business case clear.
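That framing can be sanity-checked with simple payback math, using the hypothetical figures above and ignoring the ramp-up period:

```python
def payback_months(investment: float, annual_cost_before: float,
                   annual_cost_after: float) -> float:
    """Months for a debt-paydown investment to pay for itself."""
    annual_savings = annual_cost_before - annual_cost_after
    return investment / (annual_savings / 12)

# $300K of paydown cuts the annual "interest" from $1.5M to $400K
print(round(payback_months(300_000, 1_500_000, 400_000), 1))  # 3.3
```

Even with generous error bars on every input, a payback measured in months rather than years is the kind of number that survives a budget discussion.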
Visual Dashboards
Show, don't tell. A dashboard showing:
- Cycle time trending downward (good) or upward (bad)
- Incident frequency over time
- Test coverage over time
- Team capacity allocation (features vs. maintenance)
These visuals resonate with non-technical stakeholders because they don't require technical knowledge to understand direction.
Quarterly Debt Reports
Present tech debt in the same way you present quarterly business metrics:
Q1 2026 Tech Debt Report
- Overall health: 6/10 (was 7/10, declining)
- Key risks: Cycle time has increased 40% YoY; payment system reliability declining
- Investment required: 2 engineers for 6 weeks to stabilize payment service
- Expected ROI: 25% reduction in incidents; 15% faster cycle time
- Timeline: 8 weeks to measurable improvement
This structure is familiar to business stakeholders. It sets expectations, ties investment to outcomes, and demonstrates accountability.
The "Debt Budget" Approach
Some teams allocate a percentage of capacity as the "debt budget"—20% of sprint capacity is designated for debt paydown, security upgrades, and maintenance, while 80% goes to features. This is transparent and prevents the "all maintenance, no progress" trap.
Tech Debt Reduction Strategies Informed by Data
With metrics in place, you can now prioritize what to pay down. Most teams make this decision based on what's annoying or what's oldest. Data-driven teams prioritize by impact.
1. Prioritize by Delivery Impact, Not by Size
A complex, poorly tested internal admin tool might have high code complexity, but if it's not on the critical path, paying it down won't improve velocity. Instead, focus on high-complexity code in your core systems—the payment processor, the API layer, the authentication service.
Use delivery impact metrics to identify which components are slowing you down, then improve those.
2. Fix the Highest-Incident Systems First
Systems that produce the most incidents are the right target for debt paydown. If the billing service has 8 incidents per quarter while the analytics pipeline has none, invest in billing first.
3. Create Quarterly Debt-Paydown Initiatives
Rather than sprinkling debt work across the backlog, dedicate specific initiatives: "Q2 initiative: Reduce payment service incident rate from 4/month to <1/month." This creates focus and accountability.
4. Measure the Impact of Paydown
Once you've paid down debt in a component, did cycle time improve? Did incidents drop? Did team satisfaction increase? Measure before and after to show ROI and build the business case for future investment.
How AI Agents Automate Tech Debt Detection and Prioritization
Manually tracking all these metrics is labor-intensive. Modern AI-driven platforms are automating detection and prioritization.
Agentic tools can now:
- Scan your codebase continuously to detect code complexity, duplication, and smells—without relying on developers to run tools
- Correlate metrics automatically: Identify which code quality issues correlate with high incident rates or slow cycle times
- Prioritize autonomously: Given your business goals, suggest which components to pay down first
- Generate actionable reports: Produce scorecards and dashboards without manual data compilation
- Track trends over time: Compare month-to-month progress and alert when debt is accumulating too quickly
This automation matters because teams that measure technical debt in real time can intervene before it becomes critical.
Introducing Glue: Making Tech Debt Measurable and Actionable
Measuring technical debt across the five categories we've outlined requires integrating multiple data sources: your codebase, your CI/CD pipeline, your incident tracking system, your project management tools.
Glue is an agentic product OS built specifically to help engineering teams make sense of this data and act on it. Rather than forcing you to move data between tools, Glue agents work across your existing engineering stack—connecting your GitHub repositories, incident trackers, and project management systems to surface the technical debt that matters most.
With Glue, engineering leaders can:
- Build automated tech debt scorecards that update weekly, surfacing the metrics that matter without manual compilation
- Identify high-impact debt automatically: Which code quality issues are actually slowing you down? Which incidents cluster around specific high-complexity components? Glue's agents find these correlations
- Prioritize debt paydown by impact: Given your business goals (ship faster, improve reliability, hire/retain better), Glue recommends which debt to pay down first and in what order
- Track progress over time: Visualize how your tech debt health improves as you pay down debt, with clear before/after metrics
Instead of technical debt metrics remaining hidden in spreadsheets and Jira tickets, Glue's agents continuously monitor, synthesize, and present them in a way that engineering leaders and stakeholders can understand and act on.
The result: technical debt moves from the "dark matter" of engineering—something everyone feels but no one can measure—to a measurable, manageable, prioritized component of your engineering roadmap.
Conclusion: From Invisible to Manageable
Technical debt doesn't disappear when you ignore it. It compounds. It slows you down. It burns out your team.
But with the right metrics—across code quality, delivery speed, maintenance burden, business impact, and trends—technical debt becomes visible. And once it's visible, it becomes manageable.
Start with one category of metrics. Code quality metrics are easiest (SonarQube scores, test coverage). Then layer on delivery impact (cycle time, deployment frequency). Build a quarterly scorecard. Present it to leadership. And begin prioritizing debt paydown by data, not by gut feel.
The teams winning in today's competitive landscape aren't just shipping faster—they're measuring and managing the technical debt that slows everyone else down. They know, to the dollar, what their debt is costing them. And they have a plan to reduce it.
Your engineering team can do the same. Start measuring today.
Additional Resources
- SonarQube Documentation
- DORA Metrics & Deployment Frequency
- Measuring Code Complexity
- The True Cost of Technical Debt
Related Reading
- Technical Debt: The Complete Guide for Engineering Leaders
- Types of Technical Debt: A Classification That Actually Helps
- Code Refactoring: The Complete Guide to Improving Your Codebase
- DORA Metrics: The Complete Guide for Engineering Leaders
- Engineering Efficiency Metrics: The 12 Numbers That Actually Matter
- Code Quality Metrics: What Actually Matters