Convert technical debt into measurable signals: incident correlation, change latency, and business impact. Learn how to prioritize debt remediation.
At Salesken, we had a 'tech debt' label in Jira with 200+ tickets. When our board asked how much technical debt we had, I couldn't give them a number. That experience taught me that unmeasured debt is invisible debt.
Measuring technical debt is the practice of identifying, quantifying, and tracking technical debt so leadership understands the scope of the problem and can make decisions about whether and when to pay it down.
Technical debt is often invisible. Code is slow, but no one measured how slow. A system is fragile, but no one quantified the fragility. Tests are missing, but no one tracked how much coverage was lost. Without measurement, it's hard to argue that technical debt is a problem worth solving.
Measuring technical debt serves three purposes:
First, it makes the problem visible to leadership. Many leaders don't understand why engineering wants to spend time on technical debt work. They see it as "making the code cleaner" rather than "enabling faster shipping." Measurement changes that. "We have 2 million lines of code with 15% test coverage. Our competitors have 60%. That's why our bug rate is higher and our speed is slower." Now it's concrete.
Second, it enables prioritization. If you've measured technical debt in three systems, and one system shows very low test coverage, high coupling, and high error rates, you know where to start paying it down.
Third, it enables tracking progress. If you say "we're going to reduce technical debt," how do you know if you're succeeding? Measurement tells you: "We were at 20% test coverage six months ago, now we're at 45%." That's progress you can demonstrate.
Technical debt is multidimensional; you can't capture it with a single number. The dimensions worth tracking:
Test coverage: What percentage of code is tested? 0-100%.
Code complexity: How complex is the codebase? Measured by cyclomatic complexity (how many paths through the code), or simpler proxies like "how many functions are longer than 100 lines?"
Coupling: How interdependent are modules? If a change to module A breaks module B, they're tightly coupled.
Age: How old is the code? Old code (especially unchanged for 2+ years) is often outdated or brittle.
Churn: How often is code being changed? High churn indicates instability. Low churn might indicate abandonment.
Error rate: How often does this system fail in production?
Iteration time: How long does it take to make and test a change?
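To make the multidimensional point concrete, here is a minimal sketch of how a team might record a few of these dimensions per system and compare two systems. The `DebtSnapshot` fields, the weighting in `riskier`, and the sample numbers are all illustrative assumptions, not a standard formula; any real scorecard should weight dimensions by what matters to your business.

```python
from dataclasses import dataclass

@dataclass
class DebtSnapshot:
    """One measurement of a system's debt dimensions (hypothetical fields)."""
    system: str
    test_coverage: float   # 0.0-1.0
    avg_complexity: float  # e.g. mean cyclomatic complexity per function
    error_rate: float      # production errors per 1k requests
    median_fix_days: float # cycle time for small bug fixes

def riskier(a: DebtSnapshot, b: DebtSnapshot) -> DebtSnapshot:
    """Crude comparison: lower coverage and higher error rate = riskier.

    The weights here are arbitrary; tune them to your own context.
    """
    score_a = (1 - a.test_coverage) + a.error_rate / 10
    score_b = (1 - b.test_coverage) + b.error_rate / 10
    return a if score_a >= score_b else b

payments = DebtSnapshot("payments", 0.20, 14.0, 4.2, 9.0)
reporting = DebtSnapshot("reporting", 0.55, 6.0, 0.8, 2.0)
print(riskier(payments, reporting).system)  # payments
```

Even a crude scorecard like this turns "which system should we pay down first?" from a debate into a comparison.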
You don't need to measure everything. Start with 2-3 dimensions that matter most for your product:
Test coverage: Most programming languages have test coverage tools. Run them. Get the percentage.
Code complexity: Tools exist for most stacks: SonarQube, pylint (Python), ESLint (JavaScript). They report complexity scores.
Coupling: Harder to measure automatically. Proxy: "How many modules touch this module? How many does this module touch?" High numbers = high coupling.
Code age: Query your version control system. "When was the last change to this file?" Files unchanged for 2+ years are worth reviewing.
Error rate: Your monitoring system likely already tracks this. What's the error rate for each major system?
Iteration time: Measure cycle time on small bug fixes. "How long from issue creation to fix shipped?" If fast, iteration is easy. If slow, something's in the way.
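The coupling proxy above ("how many modules touch this module, and how many does it touch?") can be computed from nothing more than a module-to-imports map, which most languages let you extract from source. A minimal sketch, with a made-up `graph` and module names for illustration:

```python
from collections import defaultdict

def coupling_stats(imports: dict[str, set[str]]) -> dict[str, tuple[int, int]]:
    """Return {module: (fan_out, fan_in)} from a module -> imported-modules map.

    fan_out = how many modules this one depends on;
    fan_in  = how many modules depend on this one.
    """
    fan_in = defaultdict(int)
    for mod, deps in imports.items():
        for dep in deps:
            fan_in[dep] += 1
    return {mod: (len(deps), fan_in[mod]) for mod, deps in imports.items()}

# Hypothetical import graph for a small service
graph = {
    "billing": {"db", "auth", "email"},
    "auth": {"db"},
    "email": {"db"},
    "db": set(),
}
stats = coupling_stats(graph)
print(stats["db"])  # (0, 3): depends on nothing, three modules depend on it
```

A module with high fan-in is risky to change (everything breaks with it); a module with high fan-out is hard to test in isolation. Both numbers are worth watching.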
When presenting technical debt to leadership:
Compare to competitors or benchmarks. "Industry average test coverage for products like ours is 65%. We're at 20%." Now there's context.
Connect to business outcomes. "Our error rate is 2x the industry benchmark. That's costing us customer churn and support burden."
Show improvement over time. "Six months ago we were at 15% coverage. Now we're at 30%. At this pace we'll hit 60% in 18 months."
Be honest about trade-offs. "We can ship faster if we take on tech debt. We'll be slower in 6 months, but we'll hit this market window." That's a legitimate choice when made consciously.
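The pace arithmetic in the coverage example (15% to 30% in six months, 60% in 18 months from the start) is easy to sanity-check with a one-line projection. This assumes a constant linear improvement rate, which is a simplification: real progress usually slows as the easy wins run out.

```python
def months_to_target(current: float, monthly_gain: float, target: float) -> float:
    """At the current pace, how many months to reach the target coverage?"""
    if monthly_gain <= 0:
        raise ValueError("coverage is not improving at this pace")
    return (target - current) / monthly_gain

# 15% -> 30% over six months is 2.5 points/month; aiming for 60%
print(months_to_target(15.0, 2.5, 60.0))  # 18.0 months from the starting point
```

Showing leadership the projection alongside the raw numbers makes "at this pace" a checkable claim rather than a hand-wave.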
Metrics can backfire if not used carefully.
If you measure "test coverage" and make engineers responsible for hitting 80% coverage, you'll get 80% coverage. You might also get useless tests that just check that code runs, not that code is correct.
If you measure "code complexity" and penalize functions longer than 100 lines, you'll get functions shorter than 100 lines. You might also get functions that are harder to understand because they're artificially broken apart.
Metrics are useful for trending (is test coverage improving?) and comparing (our coverage vs. competitors'?), but dangerous as absolute targets. Use them to understand, not to optimize the metric.
"We can't afford to measure technical debt." Measurement takes time, but not much: an afternoon to get test coverage numbers, another to get complexity numbers. Call it $1,000-2,000 of effort. The payoff is knowing whether a $50k refactoring investment is worth making.
"Technical debt measurement is a tech problem, not a business problem." Wrong. Business leaders need to understand technical debt to make allocation decisions. Engineers measure; PMs and leaders interpret and decide.
"Higher metrics are always better." Not necessarily. 100% test coverage with useless tests is worse than 70% coverage with meaningful tests. The goal is meaningful measurement, not maximizing the metric.
Q: Should we measure technical debt across the whole codebase or focus on specific systems?
A: Both. Start with systems that matter most to your business. If your payment system is unreliable, measure its technical debt first. But also aggregate across the codebase to understand overall health.
Q: How often should we measure technical debt?
A: Monthly or quarterly. Monthly is better if you're actively working to reduce it (you want to see progress). Quarterly is fine for tracking trends.
Q: How do we measure technical debt in legacy systems?
A: Same way. What's the test coverage? How complex is the code? How often is it changing? The numbers might be worse (old code often has low coverage and high complexity), but that's useful information.