When someone says "we have technical debt," they might mean any of seven completely different problems. And each one requires a different fix.
Treating all technical debt as one undifferentiated category is like a doctor diagnosing every patient with "you're sick." Accurate, but useless for treatment. A team drowning in copy-pasted utility functions needs a different intervention than a team whose microservices architecture has turned into a distributed monolith.
I've spent fifteen years watching teams attempt to "pay down technical debt" without first classifying what kind of debt they actually have. The result is always the same: they spend a sprint refactoring code that wasn't the real bottleneck, declare the effort a failure, and go back to shipping features on top of a shaky foundation.
Here are the seven types of technical debt, how to identify each one, and what actually fixes it.
1. Code Debt: The Most Visible, Least Dangerous
Code debt is what most engineers picture when they hear "technical debt." Duplicated logic, overly complex functions, inconsistent naming conventions, magic numbers, and the general accumulation of shortcuts taken under deadline pressure.
What it looks like in practice: A 400-line function that handles authentication, authorization, session management, and logging all in one method. Nobody wants to touch it because changing one behavior risks breaking three others. A 30-engineer B2B SaaS team tracked their code debt using cyclomatic complexity metrics and found 12% of their codebase exceeded a complexity score of 25 - the threshold above which the odds of introducing a defect rise sharply with every change.
How to detect it: Static analysis tools (SonarQube, ESLint with complexity rules, CodeClimate) catch the obvious cases. But the real signal is change frequency: files with high complexity AND high change frequency are the ones actually costing you velocity. Glue surfaces these hotspots by analyzing which complex files are being modified most often, so you focus refactoring effort where it will have the most impact.
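The hotspot idea - complexity multiplied by change frequency - is easy to sketch yourself. Here is a rough version that pulls change counts from git history and ranks files against complexity scores you'd get from a static analysis tool. The file names, thresholds, and repo layout are illustrative, not from any specific tool:

```python
import subprocess
from collections import Counter

def change_frequency(repo_path=".", since="6 months ago"):
    """Count how often each file appears in recent commits, via git log."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}",
         "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    # --pretty=format: leaves blank separator lines between commits; skip them.
    return Counter(line for line in log.splitlines() if line.strip())

def hotspots(complexity_by_file, changes, top=10):
    """Rank files by complexity x change frequency - the refactoring shortlist.
    Complex files that never change score zero and drop off the list."""
    scored = {f: c * changes.get(f, 0) for f, c in complexity_by_file.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)[:top]
```

A complex file that nobody touches scores zero here, which is exactly the point: it isn't costing you velocity, so it isn't where refactoring effort should go.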
How to fix it: Incremental refactoring during regular feature work - the "boy scout rule" of leaving code better than you found it. Dedicated refactoring sprints rarely work because they have no business-visible output and get deprioritized. Instead, budget 15-20% of every sprint for cleanup of the specific files you're already touching.
Related reading: Technical Debt: The Complete Guide
2. Architecture Debt: The Most Expensive, Hardest to See
Architecture debt is what accumulates when the system's structure no longer matches its requirements. The monolith that should have been decomposed two years ago. The synchronous API that needs to be event-driven. The single database that's become a bottleneck for three independent teams.
What it looks like in practice: A team of 40 engineers all deploying to the same monolith, with a deploy queue that runs 3+ hours. Feature work that should take a week takes three because it requires coordinating changes across modules that should be independent. Conway's Law in action - the architecture mirrors a team structure that no longer exists. (See our deep dive on Conway's Law for more on this.)
How to detect it: Architecture debt is invisible to code-level tools. You detect it through symptoms: deploy frequency declining despite team growth, cross-team PRs increasing as a percentage of total PRs, and incident post-mortems repeatedly citing "unexpected coupling" as a root cause. Codebase intelligence tools like Glue can map dependency chains and ownership boundaries, revealing where architectural boundaries have eroded.
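One of those symptoms - cross-team PRs as a share of total PRs - can be tracked with very little machinery if you have a convention mapping paths to owning teams. This sketch assumes a hypothetical directory-prefix ownership convention; adapt the `owner_of` function to however your org records ownership (CODEOWNERS, service catalogs, etc.):

```python
def cross_team_ratio(prs, owner_of):
    """Fraction of PRs whose changed files span more than one owning team.
    prs: list of file-path lists, one per PR.
    owner_of: function mapping a file path to its owning team."""
    def teams(files):
        return {owner_of(f) for f in files}
    cross = sum(1 for files in prs if len(teams(files)) > 1)
    return cross / len(prs) if prs else 0.0
```

If this ratio climbs quarter over quarter, boundaries that were supposed to be independent are eroding - the quantitative version of "unexpected coupling" in your post-mortems.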
How to fix it: Architecture debt requires dedicated investment - there's no incremental fix for "the database is a bottleneck." The key is framing it as a business decision, not a technical one. Calculate the velocity cost: if architectural friction is adding 30% to every feature estimate, that's the equivalent of losing 30% of your engineering headcount. That number gets leadership attention.
3. Test Debt: The Silent Confidence Killer
Test debt accumulates when test coverage is low, tests are brittle, or the test suite is so slow that engineers skip running it locally.
What it looks like in practice: A CI pipeline that takes 45 minutes, so engineers push directly to main and hope for the best. Integration tests that fail randomly due to timing issues, so the team ignores red builds. Manual QA as the primary quality gate, creating a bottleneck of 2-3 day review cycles. A 50-person engineering org found that 34% of their test failures were flaky - not catching real bugs, just adding noise. Engineers learned to ignore test results entirely.
How to detect it: Track three metrics: test coverage percentage by module (not just the overall number), CI pass rate (anything below 95% signals flakiness), and time-to-feedback (how long from push to test results). If your fastest feedback loop is longer than 10 minutes, engineers will stop using it.
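Flakiness in particular has a crisp operational definition you can compute from CI history: a test that both passed and failed on the same commit cannot be reacting to a code change. A minimal sketch, assuming you can export (commit, test, outcome) tuples from your CI system:

```python
from collections import defaultdict

def flaky_tests(runs):
    """Identify flaky tests from CI history.
    runs: iterable of (commit_sha, test_name, passed) tuples.
    A test that both passed and failed on the SAME commit is flaky by
    definition - the code didn't change, only the outcome did."""
    outcomes = defaultdict(set)
    for sha, test, passed in runs:
        outcomes[(sha, test)].add(passed)
    return sorted({test for (_, test), seen in outcomes.items() if len(seen) == 2})
```

Note that a test which passed on one commit and failed on a later one is excluded: that may be a real regression, which is the signal you want to keep.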
How to fix it: Delete flaky tests first - they're worse than no tests because they erode trust. Then prioritize coverage on high-change, high-complexity files (the intersection is where bugs actually live). Invest in test infrastructure to get feedback under 5 minutes. This isn't glamorous work, but it's the highest-leverage investment most teams can make.
4. Dependency Debt: The Ticking Time Bomb
Dependency debt is the accumulation of outdated, unsupported, or vulnerable third-party libraries and frameworks. It's invisible until it's an emergency.
What it looks like in practice: A Node.js application running Express 4 when Express 5 has been out for a year. A Python project pinned to Django 3.2 because the upgrade requires rewriting all template tags. A React application on version 16 when version 19 has been released. Somewhere in your dependency tree, a transitive dependency has a known CVE that's been sitting in your security scanner for six months.
How to detect it: Dependabot, Renovate, and Snyk identify outdated and vulnerable dependencies automatically. But the harder question is prioritization: which outdated dependencies are actually causing problems? The ones on your critical path - in modules that change frequently and deploy to production - matter more than the ones in your internal admin tool.
How to fix it: Automated dependency update tools (Renovate, Dependabot) handle the easy cases. The hard cases - major version upgrades that require code changes - need scheduled maintenance windows. The best teams dedicate one sprint per quarter to dependency updates, treating them as infrastructure investment rather than optional cleanup.
5. Documentation Debt: The Knowledge That Walks Out the Door
Documentation debt isn't about missing README files. It's about the gap between what the system does and what anyone can learn about the system without asking the person who built it.
What it looks like in practice: A service that only one engineer understands. An onboarding process that takes 3 months because nothing is written down. Architecture decisions that live in Slack threads from 2023. An API that has 40 endpoints, but the docs describe 15 of them, and 6 of those descriptions are wrong.
How to detect it: Measure your bus factor - how many engineers need to leave before a critical system becomes unmaintainable? If the answer is 1 for any system, you have severe documentation debt. Also track onboarding time: if new engineers take more than 6 weeks to make meaningful contributions, documentation debt is a likely cause.
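Bus factor has a simple approximation you can compute from commit history: the smallest number of top contributors who together account for a majority of a system's commits. A rough sketch (the 50% threshold is a common convention, not a standard):

```python
from collections import Counter

def bus_factor(commit_authors, threshold=0.5):
    """Approximate bus factor from commit history.
    commit_authors: one author name per commit to the system.
    Returns the smallest number of top contributors whose combined
    commits exceed `threshold` of the total."""
    counts = Counter(commit_authors)
    total = sum(counts.values())
    covered, factor = 0, 0
    for _, n in counts.most_common():
        covered += n
        factor += 1
        if covered / total > threshold:
            break
    return factor
```

Run this per service or per directory: a result of 1 on anything business-critical is the red flag described above.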
How to fix it: Manually maintained documentation is a losing battle because docs go stale the moment they're written. The more sustainable approach is making the codebase self-documenting through clear naming, architecture decision records (ADRs), and codebase intelligence tools. Glue addresses documentation debt directly by reading the actual codebase and answering questions about it in real time - eliminating the gap between what the code does and what people know about it. When anyone can ask "how does the payment service work?" and get an accurate answer derived from the code itself, the pressure to maintain separate documentation decreases dramatically.
Related reading: Technical Debt Tracking
6. Infrastructure Debt: The Stuff Nobody Wants to Own
Infrastructure debt accumulates in CI/CD pipelines, deployment processes, monitoring systems, and development environments. It's the "plumbing" that everyone depends on and nobody prioritizes.
What it looks like in practice: A deployment process that requires 14 manual steps and a prayer. A monitoring setup that alerts on everything (so the team ignores all alerts). Development environments that take a full day to set up because the setup script hasn't been updated since three services were added. A Docker Compose file with 23 services that requires 16 GB of RAM to run locally.
How to detect it: Track deployment frequency, lead time for changes, mean time to recovery, and change failure rate (the four DORA metrics). If any of these are degrading over time, infrastructure debt is likely the cause. Also track developer environment setup time - if it takes more than 2 hours to go from "git clone" to "running locally," you have infrastructure debt.
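Three of the four DORA metrics fall out of a single deploy log if each record carries a deploy timestamp, the earliest commit timestamp it shipped, and a failure flag. A minimal sketch (the record shape is an assumption; adapt it to whatever your deploy tooling emits):

```python
from datetime import datetime, timedelta

def dora_snapshot(deploys, window_days=30):
    """Compute deploy frequency, median lead time, and change failure rate.
    deploys: list of dicts with 'deployed_at' and 'committed_at' (datetimes)
    and 'failed' (bool). MTTR needs incident data and is omitted here."""
    cutoff = max(d["deployed_at"] for d in deploys) - timedelta(days=window_days)
    recent = [d for d in deploys if d["deployed_at"] >= cutoff]
    leads = sorted((d["deployed_at"] - d["committed_at"]).total_seconds() / 3600
                   for d in recent)
    mid = len(leads) // 2
    median = leads[mid] if len(leads) % 2 else (leads[mid - 1] + leads[mid]) / 2
    return {
        "deploys_per_day": len(recent) / window_days,
        "median_lead_hours": median,
        "change_failure_rate": sum(d["failed"] for d in recent) / len(recent),
    }
```

The point isn't the snapshot itself but the trend: compute this monthly and watch whether the numbers degrade as the team grows.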
How to fix it: Invest in a platform team or dedicate 10-15% of each team's capacity to infrastructure improvements. Automate the deployment pipeline first (highest leverage), then the monitoring/alerting setup, then the development environment. Each automation removes a class of human error and reduces the "I'm the only one who knows how to deploy" bus factor.
7. Process Debt: The Organizational Friction Tax
Process debt is the least technical and most impactful type. It's the accumulation of outdated workflows, unnecessary approvals, manual steps that should be automated, and team structures that no longer match the work being done.
What it looks like in practice: A code review process that requires two senior engineer approvals for every PR, including one-line config changes. A release process that requires a CAB (Change Advisory Board) meeting every Thursday. A sprint planning ceremony that takes 4 hours because the backlog hasn't been groomed. Architectural decisions that require a committee review, so engineers implement workarounds to avoid triggering the review.
How to detect it: Ask your engineers: "What's the most frustrating process you deal with?" and "What process exists that nobody understands the reason for?" The answers will reveal process debt that leadership doesn't see. Also track the ratio of "process time" to "build time" for any feature - if a feature takes 2 days to build and 5 days to get through review, approval, and deployment, your process debt is 2.5x your build time.
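The process-to-build ratio is simple enough to track in a spreadsheet, but putting it in code makes the definition explicit. A trivial sketch, assuming you can split each feature's elapsed time into "building" days and "waiting on review, approval, and deployment" days:

```python
def process_debt_ratio(features):
    """features: list of (build_days, process_days) pairs.
    Returns total process time divided by total build time - the
    '2.5x' figure from the example above comes out of this formula."""
    build = sum(b for b, _ in features)
    process = sum(p for _, p in features)
    return process / build if build else float("inf")
```

Anything consistently above 1.0 means your team spends more time waiting on process than writing software.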
How to fix it: Audit every process against its original purpose. Many processes were created to solve problems that no longer exist - the manual deploy checklist made sense before CI/CD, the architecture review board made sense before the team had senior engineers embedded in squads. Kill or simplify any process whose cost exceeds its protective value.
The Technical Debt Quadrant: Mapping Your Debt Portfolio
Not all technical debt is bad. Martin Fowler's Technical Debt Quadrant maps debt along two axes: deliberate vs. inadvertent, and reckless vs. prudent.
Deliberate and prudent: "We know this isn't ideal, but shipping now and refactoring next sprint is the right business decision." This is healthy debt - taken knowingly with a plan to repay.
Deliberate and reckless: "We don't have time for tests." This is the debt that compounds fastest and should be addressed immediately.
Inadvertent and prudent: "Now that we understand the domain better, we realize the architecture should have been different." This is learning debt - inevitable and best addressed through architecture evolution.
Inadvertent and reckless: "What's a design pattern?" This is competence debt - addressed through hiring, training, and code review standards.
Map each of your seven debt types onto this quadrant. The result tells you which debt to pay down first (deliberate-reckless), which to plan for (deliberate-prudent), and which to address through capability building (inadvertent-reckless).
Related reading: The Real Dollar Cost of Technical Debt | Technical Debt Statistics 2026
Frequently Asked Questions
What are the main types of technical debt?
The seven types are: code debt (complexity, duplication), architecture debt (structural misalignment), test debt (low coverage, flaky tests), dependency debt (outdated libraries), documentation debt (knowledge gaps), infrastructure debt (manual processes, brittle pipelines), and process debt (organizational friction). Each type requires a different detection and remediation approach.
What is the most common type of technical debt?
Code debt is the most commonly discussed, but architecture debt and documentation debt are typically more costly. Code debt is visible and fixable with incremental refactoring. Architecture debt and documentation debt compound silently and require larger investments to address - they're the types most likely to slow entire teams rather than individual engineers.
How do you identify technical debt?
Different types require different detection methods. Code debt shows up in static analysis tools. Architecture debt shows up in deployment frequency and cross-team PR patterns. Test debt shows up in CI pass rates. Documentation debt shows up in onboarding times and bus factor analysis. Codebase intelligence tools like Glue can surface multiple types simultaneously by analyzing code patterns, change frequency, and ownership data.
Should you always pay off technical debt?
No. Deliberate, prudent debt - taken knowingly with a repayment plan - is a legitimate engineering strategy. The debt worth paying down first is reckless debt (taken without a plan) and debt on high-change code paths (where the interest compounds fastest). Debt in stable, rarely-changed modules can often be safely ignored.