
Lead Time: Definition, Measurement, and How to Reduce It

Lead time is the total elapsed time from when work is requested or initiated until it is delivered to the customer or end user.

February 24, 2026 · 9 min read

What Is Lead Time?

Lead time is the duration between when a feature request is made and when it's delivered to users. Not from when work starts — from when the request enters the system.

Example: a customer requests a feature on January 1. Engineering starts work on January 15. The feature ships on March 1. Lead time = 59 days. Not 45 days.
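The arithmetic is easy to automate. A minimal sketch using Python's standard `datetime` module, with the dates from the example above:

```python
from datetime import date

# Dates from the example above
requested = date(2026, 1, 1)   # customer requests the feature
started = date(2026, 1, 15)    # engineering begins work
shipped = date(2026, 3, 1)     # feature reaches users

lead_time = (shipped - requested).days  # request -> delivery
cycle_time = (shipped - started).days   # work start -> delivery

print(lead_time)   # 59
print(cycle_time)  # 45
```

The only decision that matters here is which date counts as the start: the request date gives lead time, the work-start date gives cycle time.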

That difference matters. Lead time includes both "time waiting to start" and "time actually working." At Salesken, our sales team would tell customers "we can build that in two weeks" because that's how long the engineering work took. But the customer experienced six weeks from request to delivery because the feature sat in the backlog for a month before anyone touched it. The sales team was quoting cycle time. The customer was experiencing lead time.

[Infographic: Lead time formula]

Why Lead Time Matters for Product Teams

Lead time measures how responsive a product team actually is. Can you respond quickly to market shifts? To customer requests? To competitive threats?

When lead time is long, two things happen:

Window-of-opportunity loss. Market windows are temporary. At Salesken, we once identified a gap in how competitors handled multi-language call coaching. We had the technical capability to ship it in three weeks of engineering time. But by the time it cleared prioritization, design review, and dependency resolution, five months had passed. A competitor shipped their version in month four. Five months of lead time on three weeks of work.

Feedback delay. If it takes three months from learning about a problem to shipping the fix, feedback loops are glacial. You can't learn from what customers need if you can't respond to what they tell you. At UshaOm, our e-commerce platform, we reduced lead time on checkout flow changes from 6 weeks to 10 days. The result wasn't just faster shipping — it was better decisions, because we could iterate three times in the time it previously took to ship once.

Lead Time vs Cycle Time

They measure different things, and confusing them leads to optimizing the wrong lever.

Lead time = request to delivery. What customers experience.

Cycle time = work starts to delivery. What engineering experiences.

If lead time is 60 days but cycle time is 10 days, the 50-day gap is time spent not working. That gap is usually:

  • Waiting for a prioritization decision ("do we actually want to build this?")
  • Waiting for engineering capacity ("nobody's available to start")
  • Waiting for dependencies ("we need to refactor system X first")
  • Waiting in a deployment queue ("next release window is Thursday")
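Once you track both numbers per feature, the waiting portion falls out by subtraction. A sketch with hypothetical per-feature data:

```python
# Hypothetical per-feature data: (lead_time_days, cycle_time_days)
features = [(60, 10), (45, 12), (30, 8), (75, 15)]

for lead, cycle in features:
    queue = lead - cycle  # time spent waiting, not working
    print(f"lead={lead}d cycle={cycle}d queue={queue}d "
          f"({queue / lead:.0%} of lead time is waiting)")
```

If the waiting share is consistently above half, the bottleneck is upstream of engineering, not inside it.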

[Infographic: Lead time measurement stages]

Identifying that gap tells you where the bottleneck is. At Salesken, when I first broke down our lead time, I expected the bottleneck to be engineering execution. It wasn't. 55% of our lead time was pre-engineering: prioritization debates, design approvals, dependency resolution. We could have doubled engineering speed and only improved lead time by 20%.

How to Measure Lead Time

Start date: when was the feature requested? In GitHub, when the issue was created. In Jira, when it entered the backlog. Not when it moved to "in progress."

End date: when did it ship to users? Not when code merged — when it deployed to production and was available.

Lead time = end date minus start date.

Measure this for features over a quarter. Track the median, not the average. Averages get skewed by that one feature that took 6 months because nobody could agree on the spec. The median tells you what a typical request experiences.

At Salesken, we tracked lead time by category: bug fixes (median 4 days), small features (median 18 days), large features (median 45 days). The aggregate "lead time" number was meaningless — a mix of 4-day bug fixes and 45-day features averaged to something that described nothing real.
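Segmenting by category and comparing median to mean makes the skew visible. A sketch with hypothetical numbers (the 120-day value plays the role of the feature nobody could agree on):

```python
from statistics import median

# Hypothetical lead times in days, segmented by work type
lead_times = {
    "bug_fix": [2, 3, 4, 5, 6],
    "small_feature": [12, 15, 18, 22, 30],
    "large_feature": [35, 40, 45, 60, 120],  # one 120-day outlier
}

for category, days in lead_times.items():
    # The median resists the outlier; the mean gets dragged up by it
    print(f"{category}: median={median(days)}d, "
          f"mean={sum(days) / len(days):.0f}d")
```

For `large_feature`, the median is 45 days while the outlier pulls the mean to 60 — the median describes what a typical request experiences.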

How to Improve Lead Time

Improving lead time requires working on both cycle time and queue time. Most teams focus exclusively on cycle time (making engineering faster) and ignore queue time (making decisions faster). In my experience, queue time is the bigger lever for most teams.

Improving Queue Time

Make faster prioritization decisions. How long does it take to decide whether to build something? At UshaOm, prioritization happened in a weekly planning meeting. If you missed Monday's meeting, your feature waited a week for the next one. We switched to async prioritization with a 48-hour SLA on decisions. Queue time for new requests dropped from 12 days average to 3.

Have available capacity. If the team is always 100% utilized, every new request queues behind everything else. Having 15-20% buffer capacity means urgent requests don't stack up. This is counterintuitive — it feels like slack. It's actually responsiveness. The best explanation I've read of this is in Don Reinertsen's The Principles of Product Development Flow: utilization above 80% causes queue times to grow exponentially.
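Reinertsen's point can be illustrated with the textbook M/M/1 queueing result, where the average number of items in the system grows as u/(1−u) for utilization u. This is a sketch of the shape of the curve, not a model of any specific team:

```python
# M/M/1 queueing: average number of items in the system grows as
# u / (1 - u), where u is utilization. Past ~80% the curve turns
# sharply upward -- this is why full utilization kills responsiveness.
for u in [0.50, 0.70, 0.80, 0.90, 0.95, 0.99]:
    avg_in_system = u / (1 - u)
    print(f"utilization {u:.0%}: avg items in system {avg_in_system:.1f}")
```

Going from 80% to 95% utilization roughly quintuples the queue; going to 99% makes it twenty-five times longer.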

Reduce cross-team dependencies. If your feature requires another team to deliver something first, you're blocked until they do. At Salesken, our mobile team depended on the platform team for API changes. Every mobile feature had a hidden 2-3 week dependency. We invested a sprint in giving the mobile team direct access to create their own API endpoints within a sandboxed schema. Mobile lead time dropped 40%.

Improving Cycle Time

Reduce scope. Smaller features ship faster. Break large features into incremental deliveries. Ship the 80% that's straightforward, then iterate on the remaining 20%.

Improve code health. Codebases with high test coverage and clear architecture ship faster. Engineers spend less time understanding existing code and less time debugging. At Salesken, our well-tested payment module had cycle times one-third those of our poorly tested analytics module, even for comparably sized changes.

Invest in CI/CD. Fast pipelines, automated testing, one-click deploys. Every manual step between "code complete" and "deployed" adds to cycle time. We cut our deployment process from 45 minutes of manual steps to a 12-minute automated pipeline. That saved 33 minutes per deployment, multiplied by 8-10 deployments per week.
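The back-of-envelope math on that pipeline change, using the figures quoted above:

```python
# 45 min of manual steps replaced by a 12-min automated pipeline
minutes_saved_per_deploy = 45 - 12  # 33 minutes
deploys_per_week = (8, 10)          # range quoted above

low = minutes_saved_per_deploy * deploys_per_week[0] / 60
high = minutes_saved_per_deploy * deploys_per_week[1] / 60
print(f"~{low:.1f}-{high:.1f} engineer-hours saved per week")
```

That works out to roughly 4.4–5.5 engineer-hours a week, before counting the second-order effect: cheap deploys get run more often, which shortens lead time further.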

Reduce code dependencies. Features that touch tightly coupled modules require coordinating changes across files, services, sometimes teams. Loosely coupled code lets engineers change one thing without touching ten others.

[Infographic: Lead time reduction strategies]

Lead Time and Product Strategy

Lead time constraints shape what features get built. If your lead time is 6 months, you can't respond to fast-moving market opportunities. You're building based on a backlog committed half a year ago.

At Salesken, we served enterprise customers who expected customizations. Our early lead time of 8-10 weeks for customer-specific features was too slow — prospects would choose competitors who promised faster delivery. Reducing lead time to 3-4 weeks wasn't just an engineering improvement. It was a sales enablement strategy. Our win rate on deals requiring customization went up noticeably.

Knowing your lead time lets you make strategic decisions: "Our lead time is 90 days. Can we accept that for this market? If not, what needs to change?"

Lead Time and DORA Metrics

Lead time for changes is one of the four DORA metrics. Elite performers have lead times under one day; high performers, under one week; medium performers, under one month.
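A hypothetical helper that maps a median lead time to a DORA band, using the thresholds quoted above (the function name and cutoffs are illustrative, not an official API):

```python
def dora_band(lead_time_days: float) -> str:
    """Map median lead time (in days) to a DORA performance band."""
    if lead_time_days < 1:
        return "elite"    # under one day
    if lead_time_days <= 7:
        return "high"     # under one week
    if lead_time_days <= 30:
        return "medium"   # under one month
    return "low"

print(dora_band(0.5))  # elite
print(dora_band(12))   # medium
```

Classify per work-type median, not the aggregate — a blended number hides which category is dragging you into a lower band.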

Most teams I've worked with are in the "medium" range — and the gap between medium and high isn't engineering speed. It's decision speed and deployment automation. The engineering work might take a day, but the PR sits in review for two days, then waits for a deployment window, then gets batched with other changes. Each of those waits is lead time.

Deployment frequency is the natural complement: shorter lead times enable higher deployment frequency, and higher deployment frequency forces shorter lead times. They're a virtuous cycle.

Common Misconceptions

"Faster lead time is always better." Not always. Some features legitimately take time. Rushing a security-critical feature from 60 days to 10 days might introduce risk. But consistently long lead times (100+ days) for routine work signals a systemic problem.

"Improving lead time requires more engineers." Rarely. It usually requires better prioritization (don't start things you're not sure about), smaller scope (ship incrementally), and fewer dependencies (architect for independent teams). At Salesken, adding two engineers to a team with a 45-day lead time didn't change the lead time. Fixing the 25-day queue time did.

"Lead time only matters for startups." No. If a large customer reports a critical bug, how long until it's fixed? If a regulatory change requires a product update, how fast can you respond? These are lead time questions regardless of company size.


Frequently Asked Questions

Q: What's a good lead time?

Depends on work type. Bug fixes: days. Small features: 1-3 weeks. Large features: 1-3 months. If bug fixes consistently take months, something is structurally wrong. If large features ship in days, you might be under-scoping.

Q: How do we measure lead time without good tooling?

Start manually. Pick 10 features that shipped recently. Find the date each was requested. Find the date each deployed. Calculate the gap. You now have data. Tooling makes it automatic, but the insight comes from the measurement, not the tool.

Q: What if lead time varies wildly?

It will. That's why you segment by work type and use medians, not averages. If your P50 is 2 weeks but your P90 is 3 months, the story isn't "our lead time is 2 weeks." The story is "most things ship in 2 weeks but something is blocking the long-tail items." Investigate the P90.
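Computing P50 and P90 takes a few lines with the standard library. A sketch with hypothetical lead times that have the long tail described above:

```python
from statistics import quantiles

# Hypothetical lead times in days, with a long tail
lead_times = [5, 8, 10, 12, 14, 14, 15, 20, 45, 90]

# quantiles(n=10) returns the nine cut points P10..P90
deciles = quantiles(lead_times, n=10)
p50, p90 = deciles[4], deciles[8]
print(f"P50={p50}d, P90={p90}d")  # P50=14.0d, P90=85.5d
```

Here the P50 says "most things ship in two weeks" while the P90 says "the slowest tenth takes nearly three months" — two different stories from the same data, and the P90 is the one worth investigating.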


Related Reading

  • Cycle Time: Definition, Formula, and Why It Matters
  • DORA Metrics: The Complete Guide for Engineering Leaders
  • Deployment Frequency: The DORA Metric That Reveals Your True Engineering Velocity
  • Software Productivity: What It Really Means and How to Measure It
  • The Product Manager's Guide to Understanding Your Codebase
  • Code Dependencies: The Complete Guide
