What is Lead Time for Changes?
When I started measuring lead time at Salesken, I expected it to be around 3 days. The actual number was 9 days. Most of that wasn't development — it was waiting. Waiting for code review, waiting for QA, waiting for a deployment window. The code was done in 2 days; the system took another 7 to get it live. That gap between "done" and "deployed" is exactly what lead time for changes reveals.
Lead time for changes is one of the four core DORA (DevOps Research and Assessment) metrics, measuring the elapsed time between when you commit code and when that code runs in production. It's not just a metric—it's a window into your team's agility and your organization's ability to respond to market demands.
The DORA definition is precise: lead time for changes is the median time it takes for a commit to get into production. This includes all the stages between initial commit and live deployment: code review, testing, approval processes, and the actual deployment itself.
The Measurement Start and End Points
Understanding where to start and stop the clock is critical for accurate measurement. The start point is when code is first committed to your version control system (typically the main or develop branch). The end point is when that code is actively running in your production environment and serving real users.
What often gets confusing is what happens around the commit. Some teams ask: "Should we include the time from when the PR was opened?" Under the strict DORA definition the question is usually moot: a PR is opened after its first commit, so the review window already falls inside the commit-to-production measurement. What doesn't count is anything before that first commit: design, planning, and uncommitted work. Track that pre-commit time separately as part of your overall development cycle time.
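As a minimal sketch of the calculation itself (assuming you have already extracted commit and deploy timestamps from your version control and deployment tooling; the data shown here is made up for illustration):

```python
from datetime import datetime
from statistics import median

def lead_time_hours(changes):
    """Median commit-to-production time in hours.

    `changes` is a list of (commit_time, deploy_time) datetime pairs,
    one pair per change that reached production.
    """
    deltas = [
        (deployed - committed).total_seconds() / 3600
        for committed, deployed in changes
    ]
    return median(deltas)

changes = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 2, 9, 0)),   # 24h
    (datetime(2024, 5, 3, 9, 0), datetime(2024, 5, 3, 15, 0)),  # 6h
    (datetime(2024, 5, 4, 9, 0), datetime(2024, 5, 7, 9, 0)),   # 72h
]
print(lead_time_hours(changes))  # median of [24, 6, 72] -> 24.0
```

Note the use of the median rather than the mean: one pathological two-week change would drag an average badly, while the median stays representative of a typical change.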
Why Lead Time for Changes Matters
Lead time for changes isn't just a vanity metric. It's directly correlated with organizational performance, customer satisfaction, and your ability to compete in dynamic markets.
Correlation with Team Performance
The State of DevOps Report consistently shows that elite-performing teams (those with lead times under one day) ship features faster and with higher reliability. They're not compromising quality—they're achieving both speed and stability through better processes, automation, and organizational design.
When lead time is low, teams can iterate quickly based on user feedback, experiment with new features, and fix bugs in production before they cause widespread damage. This rapid feedback loop compounds over time, leading to better product decisions and team morale.
Customer Satisfaction and Competitive Advantage
Customers don't care about your internal processes—they care about when features ship and when problems get fixed. Organizations with short lead times ship features faster, respond to user requests quicker, and resolve critical issues in hours rather than weeks.
In competitive markets, this speed differential is the difference between capturing market share and losing it. The team that can ship a critical security patch in hours rather than days is the team that retains customer trust. The product team that can test and deploy new features weekly instead of quarterly is the team that learns faster and wins market position.
Technical Health and Predictability
Low lead time often correlates with better code quality, more comprehensive testing, and more stable deployments. This might seem counterintuitive—how can shipping faster lead to better quality? The answer is that elite teams automate heavily. When testing and deployment are automated and reliable, teams can ship with confidence. When these processes are manual and fragile, teams slow down precisely to reduce risk.
How to Measure Lead Time Accurately
Measuring lead time sounds straightforward, but in practice, teams make systematic mistakes that distort their data and lead to wrong conclusions.
Defining the Starting Event
The DORA standard is clear: start when code is committed. But which commit? In a typical workflow:
- Developer opens a PR
- Code is reviewed
- Changes are requested, developer commits again
- PR is approved and merged to main
- CI pipeline runs
- Code is deployed
Strictly, the DORA definition starts the clock at the first commit on the change. In practice, many teams start it at step 4, the merge to main (or whichever branch represents "ready for production"), because rebases and squashed commits make raw commit timestamps noisy.
For teams using trunk-based development (everyone commits directly to main), the two conventions coincide: the clock starts the moment code hits main. For teams with longer-running feature branches, pick one convention, document it, and apply it consistently, knowing that starting at the merge excludes review time the strict definition includes.
Defining the Ending Event
The ending event is where teams get confused. Production in your context might mean different things:
- Deployed to production infrastructure: Code is running on production servers
- Available to all users: Code is behind a feature flag and visible to 100% of your user base
- Available to first users: Code is behind a feature flag but enabled for early adopters
For lead time measurement, use when code is deployed to production infrastructure, regardless of whether it's hidden behind a feature flag. If you use feature flags (which you should), the toggle-on event is separate and distinct from deployment.
Handling Hotfixes vs. Features
Hotfixes often follow a different path than features. A critical security patch might go through an abbreviated approval process, jump the queue, or bypass certain testing stages.
Track hotfixes separately from feature lead times. Include them in your overall aggregate metrics, but analyze them separately. Elite teams often have sub-hour lead times for hotfixes specifically because they've optimized that critical path.
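A sketch of that separation, assuming each change has already been tagged as a hotfix or feature (the tagging itself might come from branch names or labels; the numbers here are illustrative):

```python
from statistics import median

def lead_times_by_type(changes):
    """Median lead time (hours) per change type, plus the overall median.

    `changes`: list of (change_type, hours) tuples, e.g. ("hotfix", 0.5).
    """
    by_type = {}
    for change_type, hours in changes:
        by_type.setdefault(change_type, []).append(hours)
    result = {t: median(values) for t, values in by_type.items()}
    result["overall"] = median(h for _, h in changes)
    return result

changes = [
    ("feature", 96), ("feature", 120), ("feature", 72),
    ("hotfix", 0.5), ("hotfix", 1.5),
]
print(lead_times_by_type(changes))
# {'feature': 96, 'hotfix': 1.0, 'overall': 72}
```

The per-type medians tell two different stories (a fast hotfix path, a slower feature path) that the aggregate alone would blur together.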
Accounting for Queue Time and Rework
Lead time includes queue time—the hours a PR sits waiting for review, or the time a deployment request waits in queue behind other deployments. It also includes time spent on rework after failed tests or code review feedback.
If a PR is opened and sits for 3 days waiting for review, then reviewed and approved, then merged and deployed, the lead time includes that 3-day wait. This is intentional: queue time is part of your overall lead time and reveals capacity bottlenecks.
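To see where the time actually goes, it helps to decompose a change's timeline into waiting versus working segments. A minimal sketch, assuming you can reconstruct state transitions (review requested, review started, deploy queued, and so on) from your tooling's event log; the timeline below is invented:

```python
def decompose(events):
    """Split a change's timeline into waiting vs. working hours.

    `events` is an ordered list of (timestamp_hours, state) transitions,
    where state is "queue" (waiting for review or a deploy slot) or
    "active" (coding, reviewing, CI running). Timestamps are hours
    since the first commit; the last entry marks the end of the window.
    """
    totals = {"queue": 0.0, "active": 0.0}
    for (start, state), (end, _) in zip(events, events[1:]):
        totals[state] += end - start
    return totals

timeline = [
    (0, "active"),   # commit, open PR
    (2, "queue"),    # waits 72h for a reviewer
    (74, "active"),  # review, rework, CI
    (78, "queue"),   # waits 12h for a deploy window
    (90, "active"),  # the deployment itself
    (91, "active"),  # live in production (end marker)
]
print(decompose(timeline))  # {'queue': 84.0, 'active': 7.0}
```

In this made-up example, 84 of 91 hours are pure waiting, which is the usual shape: the fix is rarely "code faster," it's "remove the queues."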
DORA Benchmarks: Where Does Your Team Stand?
The State of DevOps Report provides clear benchmarks. Here's how to interpret them:
Elite Performers: Lead time for changes < 1 day
- These teams ship multiple times per day
- They've automated testing, deployment, and approval processes
- Typically practicing trunk-based development or short-lived branches
High Performers: 1 day to 1 week
- Shipping multiple times per week
- Strong CI/CD practices with some manual gates
- Code review is streamlined but not fully automated
Medium Performers: 1 week to 1 month
- Shipping 2-4 times per month
- Manual approval processes exist
- Testing is partially automated
- Deployment windows or deployment frequency restrictions exist
Low Performers: 1 month to 6 months
- Shipping between once a month and once every six months
- Significant manual processes in code review and deployment
- Testing is largely manual
- High coordination overhead for releases
Where does your team fall? Most organizations are in the medium band, which is where significant optimization opportunities exist.
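The banding above is mechanical enough to automate against your own measured median. A small sketch:

```python
def dora_band(lead_time_days):
    """Map a median lead time (in days) to a DORA performance band."""
    if lead_time_days < 1:
        return "elite"
    if lead_time_days <= 7:
        return "high"
    if lead_time_days <= 30:
        return "medium"
    return "low"

for days in (0.2, 3, 14, 90):
    print(days, dora_band(days))
# 0.2 -> elite, 3 -> high, 14 -> medium, 90 -> low
```

Wired into a dashboard, this turns "where does your team fall?" from an annual survey question into a number you can watch week over week.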
7 Strategies to Reduce Lead Time for Changes
Reducing lead time requires attacking multiple dimensions: process, tooling, and culture. Here are the highest-impact strategies.
1. Adopt Trunk-Based Development
Trunk-based development means all developers commit frequently to a single main branch. This eliminates the queue time of feature branch reviews and the merge conflict resolution that slows down longer-running branches.
The counterintuitive advantage: fewer merge conflicts and faster integration. When 5 developers work on separate feature branches for 2 weeks, integrating is painful. When they commit to main multiple times daily, conflicts are smaller and caught immediately.
Trunk-based development requires discipline: every commit must not break the build, and feature incompleteness is hidden behind feature flags, not branch isolation.
2. Keep Pull Requests Small
A 50-line PR gets reviewed and merged in hours. A 500-line PR sits in review for days. Smaller PRs move faster through the system because reviewers understand them quickly and approve with confidence.
Aim for PRs that represent 30-60 minutes of work. This might feel fragmented, but it compounds: 10 small PRs merged in a day beats 1 large PR merged after a week.
Small PRs also reduce rework. When code review feedback requires changes, the scope of rework is contained.
3. Implement Automated Code Review
Automated code review tools catch style violations, security issues, and common mistakes before humans see the code. This removes entire categories of comments from the review process, allowing human reviewers to focus on logic and design.
Tools like SonarQube, CodeClimate, and similar platforms run on every PR and flag issues instantly. Combined with branch protection rules that require passing checks before merge, you eliminate the "fix the linter" feedback cycles.
4. Optimize Your CI/CD Pipeline
Slow CI pipelines are lead time killers. If your test suite takes 30 minutes to run, every PR merge is a 30-minute wait. If deployment to staging takes 20 minutes, that's another 20-minute gate.
Optimize aggressively:
- Run fast tests first (unit tests before integration tests)
- Run tests in parallel
- Cache dependencies and build artifacts
- Use staging environments that mirror production but deploy instantly
- Separate build time from test time
Aim for a full CI/CD pipeline that completes in under 15 minutes. Elite teams run it in under 5 minutes.
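The parallelization point is worth seeing concretely. This is a toy sketch, with `time.sleep` standing in for real test runners you would invoke via subprocess; the suite names and durations are invented:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_suite(name, seconds):
    """Stand-in for shelling out to a real test runner."""
    time.sleep(seconds)
    return name, "passed"

suites = [("unit", 0.1), ("integration", 0.3), ("e2e", 0.3)]

# Serial: total wall time is the sum of all suite durations.
start = time.perf_counter()
serial = [run_suite(name, secs) for name, secs in suites]
serial_secs = time.perf_counter() - start

# Parallel: wall time collapses toward the single slowest suite.
start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(lambda args: run_suite(*args), suites))
parallel_secs = time.perf_counter() - start

print(f"serial {serial_secs:.2f}s, parallel {parallel_secs:.2f}s")
```

The same principle scales up: a 30-minute serial suite split into well-balanced parallel shards can land near the duration of its slowest shard.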
5. Use Feature Flags to Decouple Deploy from Release
Deploying code and releasing features to users are different events. A feature flag lets you deploy incomplete work safely by hiding it from users, then flip the switch to release when ready.
This is critical for reducing lead time: your lead time measures deployment, not release. With feature flags, incomplete features or experiments can be deployed and sit in production for days before being released to users. This removes the "wait until it's completely done" requirement that adds weeks to cycle time.
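At its core a feature flag is just a runtime lookup. A deliberately minimal sketch (real teams typically use a flag service such as LaunchDarkly or Unleash rather than an in-process dict; the flag name and users here are hypothetical):

```python
# Hypothetical in-memory flag store, for illustration only.
FLAGS = {"new_checkout": {"enabled": False, "allowlist": {"alice"}}}

def is_enabled(flag_name, user=None):
    """Deployed code calls this at runtime; flipping the flag releases
    the feature to users without another deployment."""
    flag = FLAGS.get(flag_name, {"enabled": False, "allowlist": set()})
    return flag["enabled"] or user in flag["allowlist"]

def checkout(user):
    if is_enabled("new_checkout", user):
        return "new flow"   # deployed, but dark for most users
    return "old flow"       # current behavior for everyone else

print(checkout("alice"))  # early adopter sees the new flow
print(checkout("bob"))    # everyone else stays on the old flow
```

Deployment put both code paths in production; the release is the later, independent act of setting `enabled` to true.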
6. Reduce Approval Bottlenecks
Many organizations require multiple approvals before code can merge. Engineering manager approval, security approval, compliance approval. These sequential approvals multiply lead time.
Optimize approval processes:
- Automate approvals where possible (automated security scanning approves low-risk changes)
- Use code ownership rules to require only relevant reviewers
- Run approvals in parallel instead of sequence
- Remove unnecessary approval layers (if security already approved via automation, the manager approval is redundant)
Consider lightweight peer review instead of hierarchical approval. Any senior engineer can approve, not just managers.
7. Invest in Automated Testing
Manual testing is the enemy of speed. If a QA engineer must manually test every deployment, you can deploy at most a few times per day. If testing is automated, you can deploy dozens of times per day.
Implement testing layers:
- Unit tests (run in seconds, catch logic errors)
- Integration tests (run in minutes, catch component interactions)
- End-to-end tests (run in 5-10 minutes, catch user workflows)
- Smoke tests in production (run for 1-2 minutes post-deployment, catch environmental issues)
Aim for 80%+ test coverage. Not all code needs to be tested equally: test the critical paths heavily and the rarely exercised ones more lightly.
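The post-deployment smoke test layer is the least familiar of the four, so here is a small sketch. The retry loop matters because freshly deployed services often need a moment to warm up; in practice the probe would hit a `/health` endpoint over HTTP, but it's injected here so the logic stands alone:

```python
import time

def smoke_test(probe, attempts=5, delay=0.01):
    """Post-deploy smoke check: retry a health probe a few times
    before declaring the deployment bad and triggering a rollback.
    `probe` is any zero-argument callable returning True when healthy.
    """
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
    return False

# Simulate a service that becomes healthy on its third probe.
calls = {"n": 0}
def flaky_probe():
    calls["n"] += 1
    return calls["n"] >= 3

print(smoke_test(flaky_probe))                 # True: healthy after warm-up
print(smoke_test(lambda: False, attempts=2))   # False: rollback signal
```

Wiring this into the deploy pipeline gives you an automated go/no-go decision within a minute or two of every deployment.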
Common Pitfalls to Avoid
Measuring Only the Happy Path
If you measure lead time only for PRs that merged on the first try with no feedback, you're measuring a fictional metric. Include every PR that eventually made it to production, including those that were rejected, reverted, or required multiple rounds of feedback.
Ignoring Queue Time
Lead time includes queue time. If reviewers are overloaded and PRs sit in queue for days, that's part of your lead time. Don't measure "active review time" and pretend queue time doesn't count. It counts because it delays production.
Excluding Rework and Failed Deployments
If a deployment fails and requires a hotfix, that time counts. If code fails testing and requires rework, that time counts. These are part of real-world lead time. Elite teams have low lead times despite this because they fail fast and recover quickly.
Not Separating Hotfixes from Features
Hotfixes often follow an entirely different process with higher priority and abbreviated approval. Track them separately or you'll distort your metrics. A security hotfix deployed in 30 minutes is good, but if it's mixed with a 2-week feature deployment, your aggregate metric obscures both achievements.
Measuring from PR Open Instead of Commit
PR events are manual and inconsistently timed: a pull request can be opened minutes or days after the work was actually committed. Anchor your measurement on version control timestamps, either the first commit (strict DORA) or the merge to main (the common pragmatic proxy), not on when someone got around to opening the PR.
How AI Agents Can Automatically Track and Optimize Lead Time
Modern engineering teams are adopting AI agents to continuously monitor and improve lead time metrics. Rather than waiting for quarterly reports, these systems provide real-time visibility and suggest specific optimizations.
Automated tracking capabilities include:
- Real-time lead time calculation across every deployment, broken down by feature, team, and deployment type
- Anomaly detection that alerts when a PR is taking unusually long in review or when CI pipelines slow down
- Bottleneck identification that reveals exactly where time is being lost (approval queue, testing, deployment staging)
- Trend analysis that shows whether your team's lead time is improving, degrading, or plateauing
- Comparative metrics that benchmark your team against similar teams and against your own historical performance
Beyond tracking, AI agents can suggest optimizations: "This approval is taking 48% longer than your median. Would you like to add more approvers?" or "Your test suite is the slowest part of your pipeline. Running these tests in parallel could save 8 minutes per deployment."
Some teams use agents to actively optimize: automatically merging PRs that pass all checks and have approval, automatically parallelizing test runs, or dynamically routing deployments based on current queue depth.
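The anomaly-detection idea reduces to a simple statistical check. A sketch, assuming you can pull historical review durations and the current age of open PRs from your VCS; all identifiers and numbers below are invented:

```python
from statistics import median

def review_outliers(open_prs, history_hours, factor=2.0):
    """Flag open PRs whose time-in-review exceeds `factor` times the
    historical median review duration.

    `open_prs`: {pr_id: hours_in_review_so_far}
    `history_hours`: past review durations in hours.
    """
    threshold = factor * median(history_hours)
    return {pr for pr, hours in open_prs.items() if hours > threshold}

history = [2, 4, 5, 6, 8]   # past review durations; median is 5h
open_prs = {"PR-101": 3, "PR-102": 26, "PR-103": 11}
print(review_outliers(open_prs, history))  # flags PR-102 and PR-103
```

An agent running this on a schedule can nudge the team about a stuck PR the same day, instead of letting it surface in a quarterly retrospective.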
How Glue Helps Engineering Teams Master Lead Time
For engineering managers and CTOs, understanding lead time is the first step. Acting on that understanding is where most teams struggle. Build vs. buy decisions, tooling investments, and process changes require coordination across multiple systems and teams.
Glue is an Agentic Product OS designed specifically for engineering teams. It integrates with your entire development stack—GitHub, GitLab, Jira, Jenkins, monitoring tools, and more—to provide unified visibility and autonomous optimization.
Rather than manually checking GitHub for PR age, Slack for bottlenecks, and Jira for deployment status, Glue's AI agents continuously monitor your lead time for changes across all these systems. When a PR gets stuck in review, Glue alerts the team. When your CI pipeline slows down, Glue identifies the failing test. When approval bottlenecks emerge, Glue suggests process improvements backed by your actual data.
For teams serious about reaching elite performance, Glue acts as a force multiplier for your engineering leadership. It removes the manual work of metric collection and insight discovery, letting you focus on the strategic changes that actually move the needle: simplifying approval processes, investing in test automation, and removing queue bottlenecks from your pipeline. With real-time lead time tracking and AI-powered optimization suggestions, you can compress weeks of analysis into hours and measure the impact of each change immediately.
Key Takeaways
Lead time for changes is more than a metric—it's a measure of your engineering organization's effectiveness. Elite teams don't sacrifice quality for speed; they achieve both through better automation, simpler processes, and continuous optimization.
Start by measuring your current lead time accurately: from commit to production deployment, including all queue time and rework. Compare yourself against DORA benchmarks. Then systematically attack the biggest bottlenecks: automate code review and testing, reduce approval gates, adopt trunk-based development, and keep PRs small.
The gap between medium performers (1-month lead time) and elite performers (<1-day lead time) is not a 30x increase in engineering talent. It's the accumulation of process improvements, tooling investments, and cultural shifts. Every week you compress your lead time, your team ships faster, learns quicker, and gains competitive advantage.
Related Reading
- Lead Time: Definition, Measurement, and How to Reduce It
- Cycle Time: Definition, Formula, and Why It Matters
- Deployment Frequency: The DORA Metric That Reveals Your True Engineering Velocity
- DORA Metrics: The Complete Guide for Engineering Leaders
- Change Failure Rate: The DORA Metric That Reveals Your Software Quality
- PR Size and Code Review: Why Smaller Is Better