Why Your Sprint Velocity Is Lying to You

The moment your team's sprint velocity becomes a performance metric, it stops being useful.

Sahil Singh, Founder & CEO
June 30, 2026 · 8 min read

I say this as someone who's watched dozens of engineering teams game their story points, inflate estimates to look productive, and then become prisoners of a number that bears no relation to actual output. Sprint velocity - that seemingly innocent measure of how many story points a team completes per sprint - has become the most misused metric in agile software development.

The damage is insidious because velocity feels scientific. It's quantifiable. It trends upward or downward. It lets managers forecast delivery dates. It makes capacity planning look predictable. So we've built entire organizational structures around it: velocity reports, velocity targets, velocity-based bonuses, and worst of all, velocity-based commitments.

Then we wonder why our teams are burned out and our software quality is declining.

The Four Ways Sprint Velocity Lies to You

1. Velocity Inflation: The Story Point Arms Race

Let me describe a familiar scenario: Your team completes 40 story points per sprint for three straight sprints. Now you're "reliable." Now leadership expects 40 points forever.

What actually happens next?

The team discovers that 40 points is only achievable if they cut corners. So they start pointing things differently. That authentication feature that was 8 points last quarter? Now it's 5 points because "we've done this before." The refactoring work that should be 13 points? It's suddenly a 3-point "spike" because it doesn't fit the template.

The points don't change because the work changed. The points change because the team discovered that inflating estimates is how you survive predictable delivery expectations.

This is Goodhart's Law in action: when a measure becomes a target, it ceases to be a good measure. Teams will optimize for whatever metric you track - not for the actual outcome you want. When you measure velocity, teams optimize velocity. They game estimates, defer uncertainty, and avoid reporting blockers that might lower the number.

The velocity metric doesn't measure productivity. It measures how good your team is at hiding their real workload.

2. Comparing Velocity Across Teams Is Meaningless

This is where sprint velocity becomes genuinely dangerous.

Your backend team has a velocity of 45. Your frontend team has a velocity of 28. Naturally, you conclude that the backend team is more productive, or more efficient, or should take on harder problems.

This is false reasoning, and it's poisonous.

Story points are relative estimates. They're calibrated within a team's context, experience, and estimation style. One team's 8-point story might be another team's 3-point story. One team might estimate conservatively; another might be wildly optimistic. One team might include meetings and interruptions in their capacity calculations; another might only count "coding time."

The moment you cross team boundaries, story points become apples-to-oranges comparisons that breed resentment, dysfunctional behavior, and resource misallocation.

Yet somehow, this is a daily practice in organizations using velocity as a management tool. I've seen companies restructure teams based on relative velocities, reallocate work to "more productive" teams, and even adjust bonuses based on velocity comparisons. None of this is justified by the data.

3. Velocity as a Commitment Tool Is Actively Harmful

The worst abuse of sprint velocity is using it as a commitment mechanism.

"The team's committed to 40 points this sprint" might sound reasonable. It's not. It's a guarantee the team makes about inherently uncertain work, and when the work is more complex than estimated, the team chooses between two bad options: work weekends to hit the number, or miss the commitment and face consequences.

Measurement pressure changes behavior - a cousin of the Hawthorne effect, where people act differently when they know they're being observed. Your team doesn't work faster; they work under stress. They skip testing. They take shortcuts. They merge code without review. They avoid writing documentation. They defer refactoring.

The sprint velocity stays stable while the codebase quality decays.

Over time, the cost accumulates: more bugs escape to production, new features take longer to build, onboarding slows down, and you need increasingly complex features to stay on schedule. What looked like sustained velocity was actually deferred technical debt, making every sprint after harder.

The teams that truly maintain consistent velocity are the ones where velocity is descriptive, not prescriptive. Velocity tells you what actually happened; it doesn't predict what will happen.

4. Velocity Obscures Real Problems

Sprint velocity is a lagging indicator, and worse, it's a heavily filtered one. If your team's velocity is flat at 35 points for six months, that tells you very little:

  • Is the team working on harder problems?
  • Are they dealing with more interruptions?
  • Has the codebase become harder to change?
  • Are they spending more time in meetings?
  • Has team turnover affected expertise?
  • Are they actually slower, or did their pointing just get more realistic?

Velocity doesn't tell you any of this. It just tells you a number. So leaders guess at the explanation, and often guess wrong.

The Metrics That Actually Matter

If sprint velocity is deceptive, what should you measure instead?

Cycle Time

Cycle time - the elapsed time from when work starts until it reaches production - is the most useful metric you can track. It's:

  • Objective: Measured in days or hours, not relative estimates
  • Honest: You can't inflate it without shipping code
  • Actionable: Long cycle times point to specific bottlenecks (code review delays, testing queues, deployment friction)

Track cycle time by work category (features, bugs, technical work) and watch for trends. An increase in cycle time is a real signal that something is slowing your team down.
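As a concrete sketch of the idea, here is one way to compute median cycle time per work category from tracker exports. The item records and categories below are hypothetical, assumed for illustration - your tracker's export format will differ.

```python
from collections import defaultdict
from datetime import datetime
from statistics import median

# Hypothetical work items exported from a tracker: (category, started, deployed).
items = [
    ("feature", "2026-06-01", "2026-06-05"),
    ("feature", "2026-06-03", "2026-06-12"),
    ("bug",     "2026-06-04", "2026-06-05"),
    ("bug",     "2026-06-08", "2026-06-10"),
]

def cycle_times_by_category(items):
    """Median elapsed days from work start to production, grouped by category."""
    buckets = defaultdict(list)
    for category, started, deployed in items:
        days = (datetime.fromisoformat(deployed) - datetime.fromisoformat(started)).days
        buckets[category].append(days)
    return {cat: median(days) for cat, days in buckets.items()}

print(cycle_times_by_category(items))  # → {'feature': 6.5, 'bug': 1.5}
```

The median is deliberately used instead of the mean: a single stuck ticket shouldn't drag the whole picture, but a rising median is a genuine trend.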

Throughput

How many pieces of work does your team actually complete per sprint or week? Not story points - actual deliverables.

Count features shipped, bugs fixed, pull requests merged. Throughput is less sensitive to estimation style and more resistant to gaming. You can't claim a piece of work is "done" if it isn't really done.
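A minimal sketch of throughput counting, assuming you can pull completion dates from your tracker or version control (the dates below are invented for illustration):

```python
from collections import Counter
from datetime import date

# Hypothetical completion dates for shipped work (features, bug fixes, merged PRs).
completed = [
    date(2026, 6, 1), date(2026, 6, 2), date(2026, 6, 4),
    date(2026, 6, 8), date(2026, 6, 11),
    date(2026, 6, 15), date(2026, 6, 16), date(2026, 6, 18),
]

def weekly_throughput(dates):
    """Count of finished deliverables per ISO week - no story points involved."""
    return Counter(d.isocalendar()[1] for d in dates)

print(weekly_throughput(completed))  # → Counter({23: 3, 25: 3, 24: 2})
```

Note that every item counts as one, regardless of size; over enough weeks, size variation averages out, and the count is far harder to game than an estimate.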

Flow Efficiency

Of the total time elapsed from start to finish, what percentage is actual work time versus waiting time?

High flow efficiency (70%+) means your work is moving smoothly through your process. Low flow efficiency (30-40%) means work is sitting in queues, waiting for review, blocked on dependencies, or stuck in meetings.

This metric reveals bottlenecks that velocity completely obscures.
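The calculation itself is trivial once you separate active time from queue time - the hard part is instrumenting your workflow states to capture both. A sketch, with assumed example numbers:

```python
def flow_efficiency(active_hours, waiting_hours):
    """Share of total elapsed time that was actual work rather than queueing."""
    total = active_hours + waiting_hours
    return round(100 * active_hours / total, 1) if total else 0.0

# A ticket that took 6 hours of real work but sat 14 hours in review queues:
print(flow_efficiency(6, 14))  # → 30.0 - most of the lead time was waiting
```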

Escaped Defects and Quality Metrics

How many bugs escape to production per sprint? What's your change failure rate (the percentage of deployments that cause incidents)?

These are leading indicators of quality. If your velocity is stable but escaped defects are increasing, your team is cutting corners. The velocity number is lying; these metrics tell the truth.
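Change failure rate is simple to compute from deployment and incident records. The sprint-by-sprint numbers below are hypothetical, chosen to show the pattern the paragraph describes - stable deploy counts with a climbing failure rate:

```python
def change_failure_rate(deployments, incidents):
    """Percentage of deployments that caused a production incident."""
    return round(100 * incidents / deployments, 1) if deployments else 0.0

# Hypothetical history: (deployments, incident-causing deployments) per sprint.
history = [(20, 1), (22, 2), (21, 4)]
rates = [change_failure_rate(d, i) for d, i in history]
print(rates)  # → [5.0, 9.1, 19.0] - velocity can look flat while this climbs
```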

Codebase Health

Code complexity, test coverage, dependency age, code duplication, and technical debt ratios are far better predictors of future velocity than past velocity ever was.

A team inheriting a clean, well-tested codebase with modern dependencies will move faster than a team inheriting a tangle of legacy code, regardless of what the previous team's velocity metrics said.

Tools that analyze your actual codebase - measuring complexity distribution, testing gaps, dependency health, and documentation coverage - give you a much more honest assessment of your team's capacity and constraints than any velocity chart.
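To make "codebase health" concrete, here is an illustrative composite score. The weights and penalty formula are assumptions invented for this sketch, not an established standard or any particular tool's method - the point is only that these inputs are measurable and comparable over time:

```python
def health_score(test_coverage_pct, avg_complexity, outdated_dep_pct):
    """Illustrative 0-100 composite of a few codebase health signals.

    Weights and thresholds are assumptions for demonstration only.
    """
    coverage = test_coverage_pct                    # higher coverage is better
    complexity = max(0, 100 - 10 * avg_complexity)  # penalize high avg cyclomatic complexity
    deps = 100 - outdated_dep_pct                   # fewer stale dependencies is better
    return round(0.4 * coverage + 0.3 * complexity + 0.3 * deps, 1)

# 80% coverage, average complexity of 4, a quarter of dependencies outdated:
print(health_score(80, 4.0, 25))  # → 72.5
```

Whatever formula you choose matters less than tracking the same one consistently: a falling score is an early warning that future velocity will fall too.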

The Path Forward

If your organization is heavily dependent on sprint velocity for planning and management, here's how to transition:

Start dual-tracking. Measure velocity as you always have, but also start tracking cycle time, throughput, and escaped defects. Let the new metrics run in parallel for two to three sprints.

Stop using velocity as a commitment mechanism. Forecast from historical throughput, not from promised story points. Let teams do their best work without the pressure of hitting a predetermined number.
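One common way to forecast from throughput is a Monte Carlo simulation: resample past sprint throughput until the backlog is exhausted, then read off percentiles. The history and backlog below are invented for illustration:

```python
import random

random.seed(7)  # deterministic for this illustration

# Hypothetical history: items actually finished in each of the last 10 sprints.
history = [6, 8, 5, 7, 9, 6, 7, 8, 6, 7]
backlog = 40  # items remaining in the plan

def simulate_sprints(history, backlog, runs=5000):
    """Monte Carlo forecast: resample past throughput until the backlog is empty."""
    outcomes = []
    for _ in range(runs):
        remaining, sprints = backlog, 0
        while remaining > 0:
            remaining -= random.choice(history)
            sprints += 1
        outcomes.append(sprints)
    return sorted(outcomes)

outcomes = simulate_sprints(history, backlog)
p50 = outcomes[len(outcomes) // 2]         # median forecast
p85 = outcomes[int(len(outcomes) * 0.85)]  # "85% likely to finish within" forecast
print(f"50% chance within {p50} sprints, 85% chance within {p85}")
```

A range with a confidence level ("85% likely within N sprints") is an honest forecast; a single committed number is a promise about uncertain work.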

Use velocity for retrospectives, not predictions. Look at actual velocity trends to inform capacity planning, but add context: What changed? What problems did we solve? What slowed us down? Use velocity as a starting point for conversation, not as the answer.

Make code health visible. Measure and publicly track the metrics that actually predict whether your team can move fast: complexity distribution, test coverage, dependency age, and technical debt. These are the real constraints on velocity.

The hardest part of moving away from sprint velocity isn't the metrics - it's the culture shift. Leaders lose what feels like predictability. Teams lose the simple scorecard. But what you gain is honesty: you see what's really happening in your code and your process, and you can actually address the problems instead of just measuring around them.

Sprint velocity isn't evil. It's just useless as a management tool. The sooner you stop optimizing for it, the faster your team will actually go.


What metrics is your team actually using to drive decisions? If you're measuring code quality and real-world outcomes, you're already ahead. The teams that combine velocity data with codebase health metrics and complexity analysis tend to find that their actual performance accelerates while their stress decreases.
