Introduction
At Salesken, I spent a quarter obsessing over velocity — story points completed, features shipped, sprint burndowns looking great. Then I actually measured cycle time: the median was 11 days from first commit to production. We were "shipping fast" on paper but slow where it counted.
Your engineering team ships code every day. But how long does it actually take from the moment a developer starts work to the moment that work reaches production?
That's cycle time—and it's the most important metric most engineering teams aren't measuring.
Unlike lead time (which starts when work is requested) or velocity (which measures completed story points), cycle time measures the real elapsed time work spends in active development. It's the metric that reveals where your team loses hours: in review queues, waiting for approvals, slow CI/CD pipelines, or developers context-switching between tasks.
For engineering managers and CTOs trying to scale teams without burning out developers, cycle time is the diagnostic tool that turns gut feelings into actionable insights. This guide will show you how to measure it, benchmark your team against industry standards, and implement eight concrete tactics to reduce it.
1. What is Cycle Time? Definition and Core Concepts
Cycle time is the elapsed time between when a developer starts working on a task and when that work is deployed to production (or merged to main, depending on your definition).
The Clock Starts When Work Begins
A critical distinction: cycle time starts when development begins, not when the ticket is created. If a task sits in the backlog for three months before anyone touches it, those three months don't count toward cycle time. This is what separates cycle time from lead time.
Lead Time (Total):
[Backlog Queue] → [Active Dev] → [Review] → [Deploy]
← This includes waiting in backlog
Cycle Time (What matters):
[Active Dev] → [Review] → [Deploy]
← Clock starts HERE
Cycle Time vs. Lead Time: What's the Difference?
- Lead Time: How long from when a customer requests a feature to when it ships. Includes backlog grooming, prioritization, and waiting time.
- Cycle Time: How long from when development actually starts to when work ships. Reflects team efficiency and process bottlenecks.
For a startup juggling priorities, lead time might be 8 weeks (including a 6-week backlog). Cycle time might be 3 days (the actual development work). Improving cycle time is about removing friction once work starts.
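To make the distinction concrete, here is a minimal Python sketch (the ticket timestamps are invented for illustration): lead time and cycle time are just two different subtractions over the same event history.

```python
from datetime import datetime

def hours_between(start: str, end: str) -> float:
    """Elapsed hours between two ISO-8601 timestamps."""
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

# Hypothetical ticket history: six weeks in the backlog, three days of work
ticket = {
    "created": "2024-03-01T09:00",      # request lands in the backlog
    "in_progress": "2024-04-12T10:00",  # a developer starts work
    "deployed": "2024-04-15T10:00",     # change reaches production
}

lead_time_h = hours_between(ticket["created"], ticket["deployed"])       # includes backlog wait
cycle_time_h = hours_between(ticket["in_progress"], ticket["deployed"])  # active work only
```

With these numbers, lead time is about 45 days while cycle time is 72 hours: the backlog wait dominates, and no amount of coding speed would change the lead time.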
Why This Distinction Matters
If your CTO asks, "Why can't we ship faster?" the answer lies in cycle time, not lead time. Longer backlogs (high lead time) aren't your problem during active development. Slow reviews, broken CI/CD, or blocked deployments (high cycle time) are.
2. Why Cycle Time Is the #1 Metric for Engineering Teams
If you could only track one metric, it should be cycle time. Here's why:
Cycle Time Captures Everything That Matters
Coding Speed: A developer who ships working code in 2 hours instead of 4 shows up immediately in cycle time.
Review Bottlenecks: If a PR sits in review for a day, that's a day of lost cycle time. This is often the biggest hidden cost.
Deployment Friction: A team with automated CI/CD ships in minutes. A team with manual gates waits hours. Cycle time makes this visible.
Queue Time: When work piles up and developers are waiting for reviewers or previous deployments to finish, cycle time grows.
Context Switching Overhead: When a developer works on three different tickets in one day, cycle time increases because the actual work is stretched across more time.
Cycle time is a leading indicator of team health. It correlates directly with:
- Developer satisfaction (shorter cycle time = faster feedback, more autonomy)
- Ship velocity (shorter cycle time = more deployments per week)
- Defect rates (when you can ship quickly, you can iterate and fix issues)
- Ability to respond to production incidents
The Industry Reality Check
High cycle time is often a symptom of organizational dysfunction:
- Unclear prioritization (work starts, then gets deprioritized)
- Inadequate testing (reviews take longer because reviewers don't trust the code)
- Insufficient automation (manual testing, manual deployments)
- Siloed expertise (code review approval bottleneck)
Reducing cycle time forces teams to address these systemic issues.
3. The Anatomy of Cycle Time: Where Teams Actually Lose Time
Let's break down what happens between "work starts" and "code ships":
Development Cycle Time =
Coding Time + Pickup Time + Review Time + Deploy Time
Coding Time (20-40% of cycle time)
The time a developer actually spends writing code. This is the only part where visible "work" happens.
Reality: Most teams vastly underestimate how small this percentage is.
Pickup Time (5-15% of cycle time)
The delay between when work is ready for the next step and when someone actually picks it up, most visibly the gap between a PR being opened and a reviewer first looking at it. (Backlog wait before work starts belongs to lead time, not cycle time.) Pickup delays come from:
- Waiting for a reviewer's attention
- Waiting for clarification or blocked dependencies
- Context switching from previous work
Review Time (30-50% of cycle time)
The most commonly overlooked bottleneck. Review time includes:
- Waiting for a reviewer to look at the PR
- Back-and-forth discussions
- Waiting for requested changes to be addressed
The hidden cost: A single reviewer who's slow can bottleneck an entire team. If your code reviews have a 24-hour SLA but PRs sit for 2 days, you've just added 48 hours to every feature.
Deploy Time (5-20% of cycle time)
Time from "code is reviewed and approved" to "code is live in production."
This includes:
- Waiting for CI to run
- Manual deployment gates and approvals
- Waiting for previous deployments to finish
- Post-deployment validation
The Brutal Truth: It's Waiting, Not Coding
In most engineering organizations, less than 30% of cycle time is developers actually writing code. The remaining 70%+ is waiting for reviews, waiting for CI, waiting for approval, context switching, and blockers.
This is why hiring more developers doesn't reduce cycle time. You can't code faster if you're blocked waiting for feedback.
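A quick sketch of that decomposition, with invented phase durations, shows how small the coding share can be:

```python
# Hypothetical phase durations in hours for one feature (invented for illustration).
phases = {"coding": 6.0, "pickup": 4.0, "review": 16.0, "deploy": 2.0}

total = sum(phases.values())  # 28 hours of cycle time
shares = {name: round(100 * hours / total, 1) for name, hours in phases.items()}
# With these numbers, coding is ~21% of cycle time; the rest is waiting.
```

Plugging in your own team's numbers usually tells the same story: the biggest term is almost never coding.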
4. How to Measure Cycle Time: Data Sources and Best Practices
Measuring cycle time requires pulling data from multiple sources. The good news: most of this data already exists in your tools.
Data Sources
Git / GitHub / GitLab
- Timestamp when a commit was first pushed (start of active coding)
- Timestamp when the PR was created
- Timestamp when the PR was merged
Jira / Linear / Project Management Tool
- When a task moved to "In Progress" (start of cycle time)
- When a task moved to "Done" or "Deployed"
- Custom fields marking deployment dates
CI/CD Platform (GitHub Actions, GitLab CI, CircleCI)
- When tests started running
- When deployment began
- When deployment completed
Monitoring Tools (Datadog, New Relic, etc.)
- When code reached production (post-deployment start time)
Defining the Boundaries: Start and End Points
Start Point Options:
- When a ticket is moved to "In Progress"
- When the first commit on a feature branch is pushed
- When development work first touches the codebase
(Recommendation: Use "moved to In Progress" for consistency, as it's a clear, deliberate action.)
End Point Options:
- When code is merged to main
- When code is deployed to staging
- When code is deployed to production
(Recommendation: Use production deployment for customer-facing work, main branch for internal tools.)
Handling Outliers and Edge Cases
Multi-day cycle times: If a single PR has cycle time of 3 weeks because it was deprioritized twice, should you include it?
Answer: Yes, include it—but analyze separately. Separate "normal" outliers (legitimate large features) from "anomaly" outliers (work that got stuck).
Paused work: If a developer starts a ticket, then it's blocked for a week waiting for a dependency, does that week count?
Answer: This is context-dependent. For process improvement, it should count (it reveals blocking issues). For coding efficiency, you might exclude it.
Hotfixes and urgent work: Production incidents skip the normal review queue. Track separately.
The Practical Approach
- Define your start and end points (production deployment recommended)
- Pull cycle time data for the last 30-90 days
- Calculate the median and p95 (the median resists outliers; the p95 exposes them)
- Segment by:
- Feature vs. bug vs. hotfix
- Team or squad
- Complexity (small, medium, large)
- Review in retrospectives: "Why did PR #234 take 5 days?"
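Step 3 of the list above takes only a few lines of standard-library Python. This sketch uses invented cycle times, and the nearest-rank p95 is one of several common percentile definitions:

```python
import math
import statistics

def p95(values):
    """Nearest-rank 95th percentile."""
    ordered = sorted(values)
    rank = math.ceil(0.95 * len(ordered)) - 1
    return ordered[rank]

# Hypothetical cycle times in hours for recently merged PRs (invented data).
cycle_times = [4, 6, 8, 8, 10, 12, 14, 18, 22, 30, 48, 120]  # 120h = a stuck PR

median_h = statistics.median(cycle_times)
p95_h = p95(cycle_times)
```

Notice how the stuck PR barely moves the median but defines the p95. That is exactly why you want both numbers: the median describes your typical flow, the p95 describes your worst week.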
5. Benchmarks: What "Good" Looks Like at Different Stages
Cycle time varies wildly by company stage. Comparing yourself to Netflix or Stripe is unfair if you're a 15-person startup.
Startup (<50 engineers)
- Median cycle time: 4-12 hours
- Target P95: < 24 hours
- Context: Fewer layers of review, faster decision-making, but less process discipline
How it happens: Small teams move fast because there are fewer approvers and less process overhead. Review happens quickly because everyone knows the codebase.
Growth-Stage (50-200 engineers)
- Median cycle time: 1-3 days
- Target P95: < 7 days
- Context: Growing process friction, multiple teams, more review gates
The challenge: As teams grow, the natural friction increases. More reviewers needed, more stakeholders to notify, more testing required.
Enterprise (200+ engineers)
- Median cycle time: 3-7 days
- Target P95: < 21 days
- Context: Multiple approval gates, compliance requirements, multi-team dependencies
The reality: Enterprise cycle time includes legitimate overhead (security reviews, compliance audits). But in my experience most enterprises carry unnecessarily long cycle times driven by process bloat, not by genuine compliance needs.
By Practice: Comparison Points
| Company Type | Low | Typical | High |
|---|---|---|---|
| Startups (trunk-based) | 2-4 hrs | 4-12 hrs | 24+ hrs |
| Scaling teams | 12-24 hrs | 1-3 days | 1+ weeks |
| Enterprises | 2-5 days | 3-7 days | 3+ weeks |
| Regulated (finance, healthcare) | 1-3 days | 7-14 days | 4+ weeks |
Setting Your Target
A good target depends on your tech strategy:
- High-velocity teams (shipping multiple times daily): Target < 4 hours
- Balanced teams (daily deployments): Target < 24 hours
- Stable teams (weekly releases): Target < 3 days
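If it helps to operationalize this, here is a small sketch that checks a team's median against the rough bands from the table above. The bands are illustrative, not authoritative:

```python
# Rough median cycle time bands in hours, loosely based on the table above.
# These bands are illustrative assumptions, not an industry standard.
BENCHMARKS = {
    "startup": (4, 12),       # <50 engineers
    "growth": (24, 72),       # 50-200 engineers
    "enterprise": (72, 168),  # 200+ engineers
}

def assess(stage: str, median_hours: float) -> str:
    """Place a team's median cycle time relative to its stage's typical band."""
    low, high = BENCHMARKS[stage]
    if median_hours < low:
        return "ahead of typical"
    if median_hours <= high:
        return "typical"
    return "behind typical"
```

For example, a 48-hour median is typical for a growth-stage team but well behind typical for a startup.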
6. 8 Tactics to Reduce Cycle Time: Actionable Strategies for Engineering Leaders
Reducing cycle time requires attacking it from multiple angles. Here are eight proven tactics, ranked by impact and ease of implementation.
1. Implement Continuous Code Review SLAs (Impact: High, Effort: Low)
The Problem: Code reviews are the #1 cycle time killer. A PR waiting 24+ hours for a review adds a full day to every task.
The Solution:
- Set a team SLA: First review within 2 hours, follow-up reviews within 24 hours
- Use GitHub/GitLab notifications to alert reviewers
- Rotate code review responsibility daily (avoid silos)
- Track code review time as a team metric in standups
Specific Implementation:
Monday: Alice is primary reviewer
Tuesday: Bob is primary reviewer
Wednesday: Charlie is primary reviewer
Everyone knows it's their responsibility. Urgent PRs get async approval or skip certain reviewers.
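The rotation itself is trivial to encode. Here is a sketch with placeholder names; the weekend fallback to Friday's reviewer is an assumption, since the schedule above only covers weekdays:

```python
from datetime import date

# Hypothetical weekday rotation; names are placeholders, Mon..Fri in order.
ROTATION = ["Alice", "Bob", "Charlie", "Dana", "Eve"]

def primary_reviewer(day: date) -> str:
    """Return the primary reviewer for a given date.
    Weekends (weekday 5 or 6) fall back to Friday's reviewer."""
    return ROTATION[min(day.weekday(), 4)]
```

Wiring this into a morning Slack reminder or a PR auto-assignment rule removes the "who should review this?" question entirely.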
Expected Impact: Reduce review cycle time by 50-70%
2. Automate Everything in CI/CD (Impact: High, Effort: Medium)
The Problem: Manual gates add hours. Waiting for CI to run adds friction.
The Solution:
- Auto-run all tests on every PR (no manual testing)
- Auto-deploy to staging on main merge (no manual staging deploys)
- Implement branch protection: require passing tests + 2 approvals, then auto-merge
- Use automated deployment to production on main (no manual prod deploys)
Specific Implementation Example:
# GitHub Actions
- When PR created: Run linting, unit tests, integration tests (5 min)
- When tests pass: Auto-add "ready to review" label
- When 2 approvals + tests pass: Auto-merge
- When merged to main: Auto-deploy to production
Expected Impact: Reduce deploy cycle time by 70-90%
3. Break Large Features Into Smaller PRs (Impact: High, Effort: Medium)
The Problem: A 500-line PR takes 3x longer to review than a 100-line PR. Large PRs get deprioritized.
The Solution:
- Set a team guideline: PRs should be reviewable in 15 minutes
- Break features into logical chunks (API endpoint, UI component, database migration as separate PRs)
- Use branch stacking or feature flags to ship incomplete features safely
Specific Example:
Instead of: "Add payment processing" (500 lines)
Break into:
- PR #1: Add payment schema migration (50 lines) → 30 min review
- PR #2: Add payment API endpoints (100 lines) → 45 min review
- PR #3: Add payment UI component (100 lines) → 45 min review
Total review time: 2 hours vs. 6+ hours for the monolithic PR
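One way to see why splitting wins even at the same total line count is a toy cost model where review effort grows superlinearly with diff size. The coefficient and exponent here are assumptions for illustration, not an empirical fit:

```python
def review_minutes(lines: int) -> float:
    """Toy model: review cost grows superlinearly with diff size,
    because large diffs demand more context-holding per line.
    The 1.3 exponent is an illustrative assumption."""
    return 0.2 * lines ** 1.3

monolith = review_minutes(500)                            # one 500-line PR
split = sum(review_minutes(n) for n in (100,) * 5)        # five 100-line PRs
```

Under this model, five 100-line reviews cost meaningfully less total effort than one 500-line review, and each piece can start (and merge) without waiting for the others.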
Expected Impact: Reduce review cycle time by 40-60%
4. Eliminate Waiting for External Dependencies (Impact: Medium, Effort: Medium)
The Problem: Work gets blocked waiting for another team, a vendor, or a third-party API.
The Solution:
- Use mocks/stubs for external dependencies during development
- Parallelize work: Design API contract early, one team builds backend, another builds frontend with mocks
- Establish SLAs for cross-team requests
Specific Implementation:
Backend team publishes API specification
Frontend team stubs API responses with JSON fixtures
Both teams work in parallel for 2 weeks
Real integration happens at the end
Expected Impact: Enable parallel work, reduce critical path by 30-40%
5. Implement Async Code Review Practices (Impact: Medium, Effort: Low)
The Problem: Synchronous reviews (waiting for real-time feedback) kill cycle time across timezones.
The Solution:
- Require written review comments, not Slack discussions
- Use GitHub/GitLab review templates for common feedback patterns
- Encourage reviewers to check PRs daily, not on-demand
- Enable "auto-approve" for trivial changes (docs, minor refactors)
Specific Implementation:
- Use GitHub code review templates with pre-written suggestions
- Set expectations: "Reviews checked at 10 AM, 2 PM, 4 PM daily"
- Create linting rules to auto-reject formatting issues (removes 20% of review comments)
Expected Impact: Reduce back-and-forth delay by 40-50%
6. Reduce Testing Friction (Impact: High, Effort: High)
The Problem: Slow tests, flaky tests, and manual testing slow down code review and deployment.
The Solution:
- Make unit tests run in < 5 seconds (split slow tests to integration tier)
- Fix flaky tests immediately (don't ignore them)
- Remove manual QA gates for CI-passing code
- Implement contract testing for API boundaries
Specific Implementation:
- Divide tests into fast (unit, < 5s) and slow (integration, < 30s) tiers
- Run fast tests on every commit, slow tests on main merge
- Delete any test that fails > 10% of the time and fix the root cause
Expected Impact: Reduce deployment cycle time by 50-70%
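The 10% flakiness rule from the implementation steps above is easy to automate. A sketch, with invented pass/fail history:

```python
# Hypothetical pass/fail history per test over recent CI runs (invented data).
history = {
    "test_checkout_flow": {"runs": 100, "failures": 2},
    "test_payment_webhook": {"runs": 100, "failures": 18},  # flaky
    "test_login": {"runs": 100, "failures": 0},
}

def flaky_tests(history, threshold=0.10):
    """Return tests failing more than `threshold` of runs,
    sorted by name. These are quarantine-and-fix candidates."""
    return sorted(
        name for name, h in history.items()
        if h["failures"] / h["runs"] > threshold
    )
```

Run something like this weekly against your CI results, and the quarantine list stops being a judgment call.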
7. Establish Clear Acceptance Criteria Upfront (Impact: Medium, Effort: Low)
The Problem: Unclear requirements lead to back-and-forth reviews and rework.
The Solution:
- Define acceptance criteria before development starts
- Use "Definition of Done" checklist
- Include acceptance criteria in the ticket, not in Slack
Specific Implementation:
**Acceptance Criteria:**
- [ ] User can add new payment method
- [ ] New method appears in payment settings
- [ ] Old payment method can be deleted
- [ ] Tests cover happy path + error cases
- [ ] No manual QA bugs reported after 48 hours
Expected Impact: Reduce review cycles (rework) by 25-40%
8. Use Metrics-Driven Continuous Improvement (Impact: Medium, Effort: Low)
The Problem: Without measurement, you're flying blind.
The Solution:
- Track cycle time weekly, report in standups
- Identify outliers: "Why did PR #X take 5 days?"
- Run monthly retros focused on cycle time: "What blocked us?"
- Set team-owned cycle time targets
Specific Implementation:
Weekly standup agenda:
- Last week median cycle time: 2.3 days (target: < 2 days)
- P95 cycle time: 7.2 days (target: < 5 days)
- Slowest PR: #234, 10 days, blocker was vendor API
- Action: Follow up with vendor on timeline
Expected Impact: Enables discovery of systematic bottlenecks, 15-25% improvement over 3 months
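A standup summary like the one above can be generated directly from raw per-PR numbers. In this sketch the PR ids, times, and targets are invented:

```python
def weekly_report(cycle_times_days, targets):
    """Summarize a week's PR cycle times (in days) against team targets.
    Returns the median, whether it met target, and the slowest PR."""
    values = sorted(cycle_times_days.values())
    mid = len(values) // 2
    median = values[mid] if len(values) % 2 else (values[mid - 1] + values[mid]) / 2
    slowest_pr = max(cycle_times_days, key=cycle_times_days.get)
    return {
        "median": median,
        "median_ok": median <= targets["median"],
        "slowest_pr": slowest_pr,
    }

# Hypothetical week of merged PRs
prs = {"#231": 1.5, "#232": 2.0, "#233": 3.0, "#234": 10.0}
report = weekly_report(prs, {"median": 2.0})
```

The slowest-PR field is the important one: it hands the retro a concrete question ("why did #234 take 10 days?") instead of an abstract average.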
7. Cycle Time vs Lead Time vs Throughput: When to Use Which Metric
Engineering teams often conflate three critical metrics. Here's how to use them strategically:
Cycle Time
- Measures: How fast your team moves once work starts
- Best for: Process optimization, identifying bottlenecks
- Owner: Engineering manager
- Action: "We need to reduce review time from 24 hours to 4 hours"
Lead Time
- Measures: Total time from request to delivery
- Best for: Customer expectations, roadmap planning
- Owner: Product manager
- Action: "We promised this feature in 4 weeks; it's waiting 2 weeks in the backlog"
Throughput (Velocity)
- Measures: How much work completes per unit time
- Best for: Capacity planning, sprint planning
- Owner: Team lead
- Action: "We complete 40 story points per sprint; this feature is 35 points"
The Relationship
Lead Time = Backlog Wait Time + Cycle Time (deploy time is already inside cycle time, per the breakdown in Section 3)
If your lead time is 8 weeks but cycle time is 3 days:
→ Problem is backlog prioritization, not execution speed
→ Solution: Groom backlog faster, prioritize better
→ Hiring more developers won't help
If lead time is 8 weeks and cycle time is also 7 weeks:
→ Problem is execution speed
→ Solution: Improve testing, code review, deployment automation
→ Hiring might help if your team is undersized
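That decision logic can be captured in a few lines. The 50% split point below is an assumed heuristic for illustration, not a standard:

```python
def diagnose(lead_time_days: float, cycle_time_days: float) -> str:
    """Crude heuristic: if most of the lead time is spent before work
    starts, the fix is prioritization; otherwise it is execution speed.
    The 50% split point is an illustrative assumption."""
    backlog_wait = lead_time_days - cycle_time_days
    if backlog_wait > lead_time_days * 0.5:
        return "backlog prioritization"
    return "execution speed"
```

With the numbers from the two scenarios above: 8 weeks of lead time against 3 days of cycle time points at prioritization, while 8 weeks against 7 weeks points at execution.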
Metrics Dashboard Template for Engineering Leaders
| Metric | Target | Actual | Trend | Owner |
|---|---|---|---|---|
| Median Cycle Time | < 24 hrs | 32 hrs | ↓ Improving | Engineering Manager |
| P95 Cycle Time | < 3 days | 5.2 days | → Stable | Engineering Manager |
| Code Review SLA | 4 hrs first review | 6 hrs | ↑ Degrading | Dev Lead |
| CI/CD Deployment Time | < 5 min | 8 min | ↑ Degrading | DevOps Lead |
| Lead Time | < 2 weeks | 3.1 weeks | → Stable | Product Manager |
| Team Throughput | 45 story pts/sprint | 42 pts | → Stable | Scrum Master |
8. How AI Agents Identify Cycle Time Bottlenecks Automatically
Modern engineering teams have too much data to analyze manually. AI agents can identify bottlenecks in real-time without human bias.
What AI Agents Can Do
Pattern Recognition: An AI agent analyzing your Git and Jira data can identify which types of work take longest (e.g., "payments features average 3 days, but auth features average 1 day").
Anomaly Detection: Automatically flag when a PR is taking longer than it should (e.g., "This data migration PR is on day 4; similar PRs complete in 6 hours").
Bottleneck Identification: "Bob is the only reviewer for mobile PRs, creating a 24-hour queue. This is adding 1 day to every mobile feature."
Predictive Analysis: "At current code review velocity, this 8-week roadmap will actually take 14 weeks. Recommend increasing reviewers or splitting features."
Trend Analysis: "Your median cycle time improved 15% this month. The driver: automated CI/CD and smaller PRs. Keep going."
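Anomaly detection of this kind does not require much machinery. Here is a sketch using a simple multiple-of-median threshold; the data and the 3x factor are invented:

```python
import statistics

def stuck_prs(open_pr_ages, history, factor=3.0):
    """Flag open PRs whose age exceeds `factor` times the historical
    median cycle time. The factor is an illustrative assumption."""
    baseline = statistics.median(history)
    return sorted(pr for pr, age in open_pr_ages.items() if age > factor * baseline)

history_hours = [4, 6, 8, 10, 12, 24]           # past cycle times (invented)
open_prs = {"#310": 5, "#311": 40, "#312": 96}  # hours since first commit (invented)

flagged = stuck_prs(open_prs, history_hours)
```

Real tools add smarter baselines (per-repo, per-change-type), but the core idea is the same: compare each in-flight PR against what "normal" looks like for your team, and alert early.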
Practical Implementation
Engineering teams increasingly use tools like GitHub's Insights, Gitprime, Glue, or Swarmia to automatically track cycle time and surface bottlenecks. AI agents go further—they can:
- Correlate cycle time with code quality: "When cycle time dropped below 24 hours, defect rates increased by 3%. Recommend better testing."
- Connect team changes to cycle time: "After onboarding two new engineers, code review SLA degraded by 40%. Recommend mentoring support."
- Benchmark against similar teams: "Your cycle time is 3x higher than comparable SaaS companies. Here's why."
The best engineering organizations use AI-powered cycle time insights as a continuous feedback loop: measure, identify bottleneck, fix, measure improvement, repeat.
Cycle Time at Glue: Automating Your Path to Faster Engineering
This is where cycle time moves from theory into practice. Glue, an Agentic Product OS for engineering teams, is purpose-built to identify and eliminate cycle time bottlenecks automatically.
Rather than manually analyzing Git commits, PR dwell times, and CI/CD logs, Glue's agentic architecture continuously observes your engineering workflow and surfaces bottlenecks in real-time. When a code review queue backs up, when a deploy pipeline slows, or when a developer is context-switching between too many tasks, Glue identifies the issue and surfaces it to your engineering team.
How Glue Reduces Cycle Time
Automated Bottleneck Detection: Glue agents monitor your Git, CI/CD, and project management tools, automatically identifying where work stalls. Instead of waiting for a weekly metrics review, you know immediately when review time spikes.
Actionable Insights: Rather than a raw number ("Your cycle time is 2.3 days"), Glue provides context: "Review time is up 40% because Alice is on PTO. Bob is covering, but has 12 PRs in queue. Recommend deprioritizing 4 lower-priority PRs."
Workflow Optimization: Glue learns your team's patterns and recommends interventions. Small PRs? Reduce cycle time 35%. Async code review? Reduce wait time by 4 hours. Glue helps teams implement these at scale.
By combining AI-powered insights with your existing tools, Glue turns cycle time from a retrospective metric into a real-time optimization target.
Key Takeaways
- Cycle time measures what matters: Active development efficiency, not total time from request to ship.
- Most cycle time is waiting, not coding: Focus on removing review queues, CI friction, and deployment gates.
- Measurement drives improvement: You can't optimize what you don't measure. Start tracking cycle time this week.
- Small PRs, fast reviews, automated CI/CD: These three habits alone reduce cycle time by 60-70%.
- Benchmarks vary by stage: A 3-day median cycle time is good for a 100-person company but unacceptable for a 20-person startup.
- Process alone isn't enough: Use AI-powered insights to identify bottlenecks your team can't see manually.
- Cycle time is a leading indicator: Teams with low cycle time have happier developers, fewer production incidents, and faster time-to-value.
Start measuring cycle time this week. Pick one bottleneck (usually code review). Fix it. Measure the improvement. Then move to the next one. Over three months, you'll see 30-50% improvements in delivery speed.
Related Reading
- Lead Time: Definition, Measurement, and How to Reduce It
- Deployment Frequency: The DORA Metric That Reveals Your True Engineering Velocity
- DORA Metrics: The Complete Guide for Engineering Leaders
- Change Failure Rate: The DORA Metric That Reveals Your Software Quality
- PR Size and Code Review: Why Smaller Is Better
- Software Productivity: What It Really Means and How to Measure It