
Technical Debt Tracking: From "We Know It When We See It" to Measurable Signals

Track technical debt with structural, operational, and velocity signals. Measure debt continuously instead of one-time audits to manage engineering capacity.


Arjun Mehta

Principal Engineer

February 23, 2026·9 min read
Technical Debt

Technical debt tracking is the practice of measuring and quantifying accumulated software quality issues using three signal categories: structural signals (cyclomatic complexity, code duplication, dependency depth), velocity signals (cycle time increases, PR review duration), and operational signals (change failure rate, incident frequency per module). Organizations that track technical debt with measurable signals rather than impressionistic labels can prioritize remediation by impact — focusing first on high-frequency, high-impact modules where a 25–30% change failure rate on critical paths indicates unacceptable risk. Continuous measurement with quarterly remediation cycles is more effective than periodic "debt quarters" that halt all feature work.

At Salesken, we had a 'tech debt' label in Jira with 200+ tickets. When our board asked how much technical debt we had, I couldn't give them a number. That experience taught me that unmeasured debt is invisible debt.

You don't know your technical debt because you're not measuring it. You have a sense of it - "the auth module is a mess," "frontend performance is bad" - but it's all impressionistic. If you actually measured it, you'd know exactly how much it's costing you.

This is the fundamental problem with how most engineering teams think about technical debt. It's treated like a feeling, a collective sense that something is wrong. So people say "we need a debt quarter" and leadership says "sure, once we ship this feature" and three years later the debt is worse. Not because people don't care. Because you can't manage what you don't measure.

Technical debt tracking is engineering telemetry. It's like ops monitoring - if you're not doing it, you're flying blind. And flying blind costs you 30–40% of your sprint capacity, though you probably don't know that because you're not measuring it.

The Framework: Three Categories of Debt Signals

The framework that works is this: measure three categories of debt signals. Structural signals tell you where the code is hard to understand. Operational signals tell you where the code is hard to change safely. Velocity signals tell you where the code is slowing you down. Together, they tell you where you actually have debt worth paying down.

Three categories of technical debt signals: structural, operational, and velocity metrics

Structural Debt Signals

These are the easiest to measure because tools already exist. Start with cyclomatic complexity - the number of branching paths through a function. Functions with cyclomatic complexity above 10–15 are hard to test and harder to understand. Functions above 20 are actively confusing. Calculate the distribution across your codebase. If 30% of your functions have complexity above 15, you have structural debt.
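As a rough illustration of the distribution idea (not a substitute for a real analyzer like radon), a crude McCabe-style count can be computed with Python's standard-library ast module. The branch-node set below is an approximation, and the 15 threshold mirrors the one discussed above:

```python
import ast

# Constructs counted as branch points - a simplification of McCabe's metric.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(func_node: ast.AST) -> int:
    """Rough McCabe approximation: 1 + number of branching constructs."""
    return 1 + sum(isinstance(n, BRANCH_NODES) for n in ast.walk(func_node))

def complexity_distribution(source: str, threshold: int = 15) -> float:
    """Fraction of functions in `source` whose complexity exceeds `threshold`."""
    tree = ast.parse(source)
    funcs = [n for n in ast.walk(tree)
             if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
    if not funcs:
        return 0.0
    over = sum(cyclomatic_complexity(f) > threshold for f in funcs)
    return over / len(funcs)
```

If `complexity_distribution` returns 0.3, that's the "30% of functions above 15" situation described above.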

Then measure coupling. How many other modules does this module depend on? A module with 15+ external dependencies is structurally fragile. Changes to those dependencies ripple through it. Track this per major module.

Third: code duplication. Tools like Pylint's duplicate-code checker (for Python) or jscpd (for most languages) can measure this. Duplication above 5–10% in a codebase signals that knowledge is scattered rather than consolidated. Someone wrote the same logic in three places instead of extracting a shared function.
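To make the metric concrete, here is a deliberately crude line-based duplication ratio. Real detectors work on tokens rather than raw lines, so treat this as a sketch of what "duplication percentage" means, with an illustrative window size:

```python
from collections import Counter

def duplication_ratio(files: list[str], window: int = 6) -> float:
    """Share of `window`-line chunks that appear more than once across
    the codebase, whitespace-normalized. A crude copy-paste signal."""
    chunks = Counter()
    for text in files:
        lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
        for i in range(len(lines) - window + 1):
            chunks[tuple(lines[i:i + window])] += 1
    total = sum(chunks.values())
    dup = sum(count for count in chunks.values() if count > 1)
    return dup / total if total else 0.0
```

A ratio above 0.05–0.10 across your source files corresponds to the threshold discussed above.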

Operational Debt Signals

This is where most teams miss the real picture. Structural metrics are easy to measure but disconnected from whether the code actually matters. Operational metrics connect to what slows you down.

Change failure rate is your first operational signal. For a given module, what percentage of PRs that touch it introduce a bug, break something else, or get reverted? Calculate this over a quarter. My benchmark: teams with <5% change failure rate on a module have high confidence changing it. Teams with 15%+ have learned to avoid it. Teams with 25%+ either don't touch it or have significant incident load.
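The calculation itself is simple once you have per-PR outcome data. This sketch assumes a hypothetical record shape (a `module` name and a `failed` flag for reverted or bug-introducing PRs) and encodes the benchmark bands above:

```python
from collections import defaultdict

def change_failure_rate(prs: list[dict]) -> dict[str, float]:
    """Per-module change failure rate over a window (e.g. one quarter).

    Each PR record is assumed to look like:
      {"module": "auth", "failed": True}  # reverted, or introduced a bug
    """
    totals, failures = defaultdict(int), defaultdict(int)
    for pr in prs:
        totals[pr["module"]] += 1
        failures[pr["module"]] += pr["failed"]
    return {m: failures[m] / totals[m] for m in totals}

def risk_band(rate: float) -> str:
    """Map a rate onto the benchmarks discussed above."""
    if rate < 0.05:
        return "low"        # high confidence changing the module
    if rate < 0.15:
        return "medium"
    if rate < 0.25:
        return "high"       # the team has learned to avoid it
    return "critical"       # untouched, or significant incident load
```

The hard part isn't the math - it's labeling failures consistently (reverts, hotfixes, linked incidents) so the rate means the same thing across modules.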

Change failure rate benchmarks showing low, medium, and critical risk thresholds

Mean time to recovery (MTTR) is next: when something in this module breaks, how fast do you fix it? If a bug in the auth module takes 30 minutes to diagnose and fix, but a bug in the logging module takes 3 minutes, the auth module has higher operational debt. Fast MTTR means the module is well-monitored, well-tested, and fast to iterate on. Slow MTTR means it's a black box.

Incident frequency per module. Some modules are touched frequently but never cause incidents. Some are touched infrequently and cause major incidents when they do. If a module represents 5% of your deployments but 25% of your incidents, that's debt you can see.
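That "5% of deployments but 25% of incidents" pattern can be expressed as a concentration ratio - incident share divided by deployment share. This is a minimal sketch assuming you already have per-module deploy and incident counts:

```python
def incident_concentration(deploys: dict[str, int],
                           incidents: dict[str, int]) -> dict[str, float]:
    """Ratio of each module's share of incidents to its share of deployments.
    A ratio well above 1.0 means incidents concentrate in that module."""
    total_d = sum(deploys.values())
    total_i = sum(incidents.values())
    if not total_d or not total_i:
        return {m: 0.0 for m in deploys}
    return {
        m: (incidents.get(m, 0) / total_i) / (deploys[m] / total_d)
        for m in deploys
    }
```

A module at 5% of deployments and 25% of incidents scores 5.0 - five times more incident-prone than its change volume predicts.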

Velocity Debt Signals

This is the most important and the most overlooked. Velocity signals tell you where the code is actually slowing you down from a product perspective.

Compare the time to implement features in different areas of your codebase. Let's say a typical feature in your API layer takes 8 engineering days from PR opened to merged to deployed. A similar-scoped feature in your payment module takes 18 days. Same team, same process, different module. That's debt. The payment module has something - technical debt, poor testing, unclear architecture - that makes it slower to change. Measure this over two quarters. If velocity in a module drops 20–30% over time, debt is accumulating.
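The comparison above reduces to per-module medians and a ratio against a baseline. A sketch, assuming a hypothetical feature record with a `module` and a `days` lead time:

```python
from statistics import median

def median_lead_time(features: list[dict]) -> dict[str, float]:
    """features: [{"module": "api", "days": 8.0}, ...]
    where days = PR opened -> merged -> deployed."""
    by_module: dict[str, list[float]] = {}
    for f in features:
        by_module.setdefault(f["module"], []).append(f["days"])
    return {m: median(days) for m, days in by_module.items()}

def relative_slowdown(lead_times: dict[str, float],
                      baseline: str) -> dict[str, float]:
    """Each module's lead time as a multiple of a baseline module's."""
    base = lead_times[baseline]
    return {m: t / base for m, t in lead_times.items()}
```

Medians beat means here because one stuck PR would otherwise dominate a quarter's number.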

Track the percentage of a sprint spent on bug fixes vs. features. If you're spending 40% of your sprint on bugs in a particular module, but only 10% in another, you have debt to address. Not all bugs are debt - some are just bugs. But if they're concentrated in one area, that's architectural debt showing itself through defect density.

The insight here is this: debt that doesn't slow you down doesn't matter. Optimizing for perfect code health in a module nobody touches is waste. But debt that slows your feature velocity by 30%, or that causes incidents, or that makes every PR a nerve-wracking exercise - that's debt worth paying down.

Continuous Measurement, Not One-Time Audits

Most teams do a technical debt audit once a year. Someone spends a week analyzing the codebase, produces a 50-page report about all the things that are wrong, the team reads it, nods, and then does nothing. The report sits in Confluence and nothing changes.

What works is continuous monitoring. Pick a dashboard tool - Grafana, Datadog, New Relic, or custom dashboards - and track your three categories of signals over time. Structural metrics monthly. Operational metrics weekly. Velocity metrics every other sprint. When cyclomatic complexity spikes, when change failure rate jumps from 5% to 15%, when feature velocity in a module drops 30% - that's when you have a conversation with the team.

"Hey, we just noticed the auth module's change failure rate spiked from 6% to 18% this quarter. Do we understand why? Is this a signal that we need to allocate time to stabilize this?" Sometimes the answer is yes and you allocate a sprint to testing and refactoring. Sometimes the answer is "we just touched it more" and there's nothing to do. But you're making the decision with data, not with a feeling.

The Tooling Landscape

Most tools are strong in one area and weak in others.

SonarQube, CodeClimate, and CodeFactor measure structural debt well - complexity, duplication, security issues. They're weak on operational debt and velocity signals. They give you a score that feels authoritative but is disconnected from actual business impact.

Datadog, New Relic, and Honeycomb are strong on operational metrics - MTTR, incident frequency, trace-level insights. They're weak on structural analysis because they work at the runtime level, not the code level.

Git-based tools like Glue, Code Climate Velocity, and GitPrime (now Pluralsight Flow) measure velocity and change patterns. They're weak on structural analysis.

The gap: almost no tool connects all three. Most engineering leaders are stuck choosing: do I optimize for code health (which I can measure with SonarQube) or deployment health (which I can measure with Datadog)? The honest answer is that you need both, and most teams cobble together multiple tools and hope they tell a consistent story.

Tool comparison matrix showing strengths and weaknesses across structural, operational, and velocity metrics

Glue bridges this gap. We pull structural metrics from your codebase, operational metrics from your incident systems and deployment logs, and velocity metrics from your Git history. Then we show you the integrated picture: which modules have debt that's actually slowing you down.

The Prioritization That Works

Once you're measuring debt signals, prioritize this way: operational and velocity debt first, structural debt second. A module with high cyclomatic complexity that nobody touches is low priority. A module with 20% change failure rate and a 30% velocity decline is critical.

Debt prioritization matrix showing critical, quick wins, low priority, and monitor quadrants

Start with the modules where operational or velocity signals are worst. For those modules, measure structural debt and fix the biggest contributors. Run the cycle quarterly.
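One way to operationalize "operational and velocity debt first, structural debt second" is a weighted score. Everything here - the record shape, the weights, the scaling constants - is illustrative, not a calibrated model:

```python
def debt_priority(modules: dict[str, dict]) -> list[str]:
    """Rank modules for remediation: operational and velocity pain
    dominate; structure acts as a tiebreaker.

    Each module record (all values as fractions):
      {"cfr": 0.20,              # change failure rate
       "velocity_decline": 0.30, # lead-time decline over two quarters
       "pct_complex": 0.40}      # share of functions above complexity 15
    """
    def score(m: dict) -> float:
        # Scale CFR against the 25% "critical" band and velocity decline
        # against the 30% band, then cap each at 1.0.
        return (0.45 * min(m["cfr"] / 0.25, 1.0)
                + 0.45 * min(m["velocity_decline"] / 0.30, 1.0)
                + 0.10 * m["pct_complex"])
    return sorted(modules, key=lambda name: score(modules[name]), reverse=True)
```

Under this scoring, a gnarly-but-untouched module ranks low, exactly as the matrix above prescribes.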

This is how technical debt becomes manageable. Not with heroic "debt quarters" where everything stops and engineers refactor for a month. But with continuous measurement, continuous prioritization, and continuous small improvements. Measure it, see it, act on it.

Frequently Asked Questions

Q: How much technical debt is normal?

A: Some debt is normal and rational. A module with 25–30% change failure rate might be acceptable if it's low-touch. But if it's on your critical path — high-frequency, high-impact — that's not acceptable. Track code quality metrics to set thresholds consciously rather than guessing.

Q: Doesn't focusing on metrics create perverse incentives?

A: Yes, if you only measure one thing. A team told "reduce your cyclomatic complexity" might just split functions into smaller functions that are still tangled. A team told "reduce change failure rate" might stop shipping and reduce risk to zero. Measure multiple signals — DORA metrics, cycle time, and structural quality together — and look at the whole picture.

Q: How long does it take to see improvements?

A: For velocity signals, 1–2 sprints after you pay down the debt. For operational signals, 2–4 weeks. For structural signals, it depends on whether you're shipping code. If you're shipping, the impact shows up in operational metrics fairly quickly.

