Technical debt assessments quantify accumulated code and architectural shortcuts. Learn how to prioritize debt by roadmap impact and remediation cost.
At Salesken, we had a 'tech debt' label in Jira with 200+ tickets. When our board asked how much technical debt we had, I couldn't give them a number. That experience taught me that unmeasured debt is invisible debt.
Technical debt assessment is the systematic evaluation of a codebase to identify, quantify, and understand the technical debt present. It answers: What technical debt do we have? Where is it concentrated? What's the cost? What should we fix first?
Assessment typically involves: code analysis (complexity metrics, test coverage, code duplication), architectural review (coupling, modularity, clarity of design), and stakeholder interviews (what feels slow or risky to the team?).
You can't manage what you don't understand. Technical debt that's invisible doesn't get prioritized. Assessment makes it visible.
Visibility enables decisions: "Do we have enough technical debt to warrant refactoring? Where should we focus?" Without assessment, technical debt is vague ("the code is messy") and political (different people blame different systems). With assessment, it's concrete and data-driven.
Assessment also enables prioritization. Not all technical debt is equal. Code that's slow but rarely changed can be left as-is. Code that's changed frequently and is hard to work with should be refactored first.
1. Define what you're assessing. Are you assessing a single system, a microservice, or the entire codebase? Scope matters. "Assess the entire codebase" might take months. "Assess the payment system" might take days.
2. Choose assessment dimensions. What matters most to your business? Usually: test coverage, code complexity, coupling, performance, and security. Choose 3-4 dimensions.
3. Collect data. For test coverage: run your test suite and measure. For code complexity: use tools like SonarQube. For coupling: analyze dependencies. For performance: benchmark critical paths.
4. Establish baselines and benchmarks. "Our test coverage is 30%. Industry average for products like us is 60%." Baselines give context.
5. Interview the team. Ask engineers: "What systems are hard to work with? What changes are risky? What would make development faster?" Their answers often point to the biggest pain points.
6. Synthesize findings. Combine data and qualitative feedback. Create a report: "Here's the state of technical debt. Here are the hot spots. Here's what fixing them would achieve."
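The data-collection step can start with plain git history: files that change most often are hotspot candidates, especially when they're also complex. Here's a minimal sketch that parses the output of `git log --name-only --pretty=format:` — the expected input format is the only assumption:

```python
from collections import Counter

def file_churn(git_log_output: str) -> Counter:
    """Count how often each file appears in git history.

    Expects the output of `git log --name-only --pretty=format:`,
    i.e. one changed-file path per line, blank lines between commits.
    """
    churn = Counter()
    for line in git_log_output.splitlines():
        path = line.strip()
        if path:  # skip the blank separators between commits
            churn[path] += 1
    return churn

# Illustrative log: three commits, two of which touch the payment module.
sample_log = """\
src/payment.py
src/billing.py

src/payment.py

src/payment.py
"""
print(file_churn(sample_log).most_common(1))  # → [('src/payment.py', 3)]
```

Cross-referencing churn counts like these with a complexity metric gives you the "changed frequently AND hard to work with" list that assessment is meant to produce.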
Test Coverage: What percentage of code is tested? 0-100%. Higher is generally better, but quality matters more than quantity. A test that just checks "code runs" isn't useful.
Cyclomatic Complexity: How many decision paths does code have? Higher complexity = harder to understand, more bugs, harder to change. Functions > 20 lines with > 10 cyclomatic complexity are warning signs.
Code Duplication: What percentage of code is duplicated? Higher = harder to maintain (fix a bug in 5 places instead of 1). Duplication > 5% is worth addressing.
Coupling: How interdependent are modules? High coupling = changing one thing breaks others. Measure: how many other modules does each module touch? Lower is better.
Code Age: How old is the code? Code unchanged for 2+ years often becomes outdated or brittle. Not always bad (stable code is fine), but worth noting.
Error Rates: How often does this system fail? Error rates > industry benchmarks suggest problems.
Developer Velocity: How fast do you ship? Declining velocity suggests the technical debt burden is increasing.
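Before adopting a platform like SonarQube, some of these metrics can be approximated with a short script. Here's a rough cyclomatic-complexity estimate in Python that counts branching nodes in the AST — an approximation, not the full graph-based definition:

```python
import ast

# Node types that add a decision path. Approximate: a BoolOp with several
# operators counts once, and match/case isn't handled.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Estimate cyclomatic complexity: 1 + number of branch points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

snippet = """
def refund(order, amount):
    if order is None:
        return False
    if amount > order.total:
        return False
    for item in order.items:
        if item.refundable:
            amount -= item.price
    return amount >= 0
"""
print(cyclomatic_complexity(snippet))  # → 5 (three ifs + one loop + 1)
```

Run over every function in a codebase, a script like this is enough to find the outliers worth a closer look; dedicated tools add the context and trend tracking.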
Hot Spots: Systems with highest technical debt or biggest impact on velocity. These should be addressed first.
Risk Areas: Systems that are risky to change because of coupling, low test coverage, or high complexity.
Improvement Opportunities: Specific, actionable improvements. "Reduce cyclomatic complexity in the payment module" is actionable. "Clean up the codebase" is vague.
Quick Wins: Easy improvements with high payoff. "Add tests to the auth module (currently 10% coverage, could be 60% in 2 weeks)."
Long-term Initiatives: Major refactoring needed. "Decouple the payment system from the billing system." Might take months.
"Measuring everything." Measurement takes time. Measure what matters for your decisions. Don't collect data you won't act on.
"Treating all debt equally." Not all technical debt matters. Code that's slow but rarely changed can be left as-is. Code that's changed frequently and hard to work with should be prioritized.
"Using metrics without context." "We have 30% test coverage" is meaningless without context. Compared to what? Is that improving or declining? What's our goal?
"Assessing without asking the team." Tools give you metrics. Teams give you context. A 5-point complexity function might be manageable if everyone understands it. A 3-point function might be incomprehensible if it's poorly named. Ask the team.
"Not sharing findings." Assessment is only useful if findings get communicated and acted on. Assess, report, discuss, decide.
Ongoing: Continuous assessment through tools. Code complexity, coverage, etc. tracked automatically.
Periodic: Quarterly assessment by the team. "What's the state of technical debt? What changed? What should we prioritize?"
Triggered: Assessment when problems surface. "Velocity is declining. Let's assess why." "This system keeps causing incidents. Let's assess it."
Assessment is only valuable if it leads to action: assess, report, discuss, decide, then fix and re-assess. Without this cycle, assessment is just busy work.
"Assessment requires external consultants." Not necessarily. Your team knows the codebase. They can assess. External consultants add objectivity but aren't required.
"Good metrics prevent all technical debt." No. Metrics help you understand debt, but some debt is intentional (ship fast now, refactor later). Metrics inform decisions; they don't make decisions for you.
"Once we assess, technical debt is solved." Assessment identifies problems. Solving them takes time and effort. Assessment is step 1, not the whole solution.
Q: How often should we assess technical debt?
A: Depends on rate of change. Fast-moving teams: quarterly. Stable teams: annually. At minimum, assess when velocity changes significantly or when you're planning major refactoring.
Q: Should we fix all the debt we identify?
A: No. Some debt isn't worth fixing. Prioritize based on: impact on velocity, impact on reliability, and effort to fix. "This system has high debt but we rarely touch it." Leave it. "This system has high debt and we change it weekly." Fix it.
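That prioritization rule can be sketched as a simple score. The fields and weights below are illustrative, not a standard formula — the point is that impact (velocity and reliability) goes in the numerator and effort in the denominator:

```python
from dataclasses import dataclass

@dataclass
class DebtItem:
    name: str
    change_frequency: int   # commits touching this system per month
    incident_count: int     # incidents attributed to it per quarter
    effort_weeks: float     # estimated remediation effort

def priority(item: DebtItem) -> float:
    """Illustrative score: impact on velocity and reliability, per unit effort.

    The 5x weight on incidents is an arbitrary example, not a benchmark.
    """
    impact = item.change_frequency + 5 * item.incident_count
    return impact / item.effort_weeks

backlog = [
    DebtItem("legacy reporting", change_frequency=1, incident_count=0, effort_weeks=8),
    DebtItem("payment module", change_frequency=20, incident_count=3, effort_weeks=4),
]
backlog.sort(key=priority, reverse=True)
print([item.name for item in backlog])  # → ['payment module', 'legacy reporting']
```

The high-debt-but-rarely-touched reporting system correctly sorts to the bottom: it's the weekly-changed payment module that earns the refactoring effort.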
Q: How do we communicate assessment findings to non-technical stakeholders?
A: In business terms. "Our test coverage is low, which increases bug rates and support costs. Improving coverage would reduce bugs by X%." Connect technical metrics to business outcomes.