
Cycle Time: Definition, Formula, and Why It Matters for Engineering Teams

February 24, 2026 · 10 min read

What Is Cycle Time?

Cycle time is the total elapsed time it takes to complete a single unit of work, from the moment active work begins until the work is ready for delivery.

In software development, that means how long it takes to move a change from "actively being worked on" to "deployed to production." It measures development velocity and process efficiency.

The key distinction: cycle time only counts active work time. It doesn't include time spent waiting in a queue, sitting in code review, or blocked on dependencies. That waiting time is part of lead time — a different metric entirely. I'll explain the difference below because I've watched teams confuse these two for years and optimize the wrong one.

Cycle Time Formula

The most common calculation:

Cycle Time = End Date - Start Date

In software development specifically:

Cycle Time = (Time PR Merged) - (Time Work Started)

Or in an Agile context:

Cycle Time = (Time Task Moved to Done) - (Time Task Started)

For averages:

Cycle Time = Total Time to Complete / Number of Units Completed

[Infographic: Cycle Time Formula]

Example: if your team completed 40 tasks across 160 working hours in a month, your average cycle time is 4 hours per task. If a single feature takes three days from start to deployment, the cycle time for that feature is three days.
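The arithmetic is simple enough to sketch directly. A minimal Python illustration — the PR timestamps here are made up, not pulled from any real tracker:

```python
from datetime import datetime

# Hypothetical (work_started, pr_merged) timestamp pairs for three PRs.
prs = [
    (datetime(2026, 2, 2, 9, 0), datetime(2026, 2, 2, 15, 0)),   # 6 hours
    (datetime(2026, 2, 3, 10, 0), datetime(2026, 2, 5, 10, 0)),  # 48 hours
    (datetime(2026, 2, 4, 9, 0), datetime(2026, 2, 4, 13, 0)),   # 4 hours
]

# Cycle time per PR = (time PR merged) - (time work started).
cycle_hours = [(merged - started).total_seconds() / 3600 for started, merged in prs]

# Average = total time to complete / number of units completed.
avg = sum(cycle_hours) / len(cycle_hours)
print(cycle_hours)    # [6.0, 48.0, 4.0]
print(round(avg, 1))  # 19.3
```

In practice the timestamps would come from your Git host's API rather than hard-coded values.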

At Salesken, we tracked cycle time per PR rather than per Jira ticket. A Jira ticket might sit "in progress" for a week while the engineer works on three different things. The PR tells you when code actually started flowing and when it shipped. That's the number that matters.

Cycle Time vs Lead Time

People confuse these constantly. At Salesken, I had a PM who kept saying "our cycle time is two weeks" when she meant lead time. The actual cycle time was 2-3 days. The other 10 days were backlog wait time and deployment queuing. Fixing cycle time wouldn't have helped her. Fixing the backlog prioritization process would have.

Cycle Time:

  • Starts when work actively begins
  • Ends when work is ready for delivery
  • Measures only active work time
  • Excludes waiting, queuing, and blocked time

Lead Time:

  • Starts when work is requested
  • Ends when work is delivered to the customer
  • Measures total elapsed time from request to delivery
  • Includes waiting, reviewing, deployment, everything

Here's a concrete example. A task gets requested on Monday. It sits in the backlog for a week. On the following Monday, an engineer starts working. They finish on Wednesday. The feature ships on Thursday.

  • Cycle Time: Wednesday minus second Monday = 2 days
  • Lead Time: Thursday minus first Monday = 10 days
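The same example can be expressed as date arithmetic; the specific dates are chosen only so that February 2, 2026 falls on a Monday:

```python
from datetime import date

requested = date(2026, 2, 2)   # first Monday: task requested, enters backlog
started   = date(2026, 2, 9)   # second Monday: engineer starts active work
finished  = date(2026, 2, 11)  # Wednesday: work ready for delivery
shipped   = date(2026, 2, 12)  # Thursday: feature delivered

cycle_time = (finished - started).days   # active work only
lead_time  = (shipped - requested).days  # request to delivery, waiting included
print(cycle_time, lead_time)  # 2 10
```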

[Infographic: Cycle Time vs Lead Time Measurement Points]

This distinction matters enormously for engineering leaders. A long lead time might mean your process is slow — or it might just mean your backlog is large. A long cycle time means the work itself is taking too long: either the task is genuinely complex, the engineer is context-switching, or they're blocked by dependencies.

At UshaOm, where I ran a team of 27 engineers building an e-commerce platform, our lead time was 3 weeks but our cycle time was 2 days. The gap was entirely backlog queue time. We didn't need faster engineers. We needed better prioritization and smaller batches entering the sprint.

[Infographic: Cycle Time Optimization Levers]

What Is a Good Cycle Time?

Benchmarks vary by team size and work type. These are for individual PRs, not entire features:

Team type: Good / Average / Needs Improvement

  • Small startup (5-15 devs): < 1 day / 1-3 days / > 5 days
  • Mid-size team (15-50 devs): < 2 days / 2-5 days / > 7 days
  • Enterprise (50+ devs): < 3 days / 3-7 days / > 10 days
  • Bug fixes: < 4 hours / 4-24 hours / > 2 days
  • Small features: < 3 days / 3-5 days / > 7 days
  • Large features: < 2 weeks / 2-4 weeks / > 6 weeks

At Salesken, our median cycle time for bug fixes was about 6 hours — not great, but acceptable for a real-time voice AI system where most fixes touched the audio pipeline and required careful testing. Our feature cycle time averaged 3-4 days. The number I watched most closely was the 90th percentile: when that crept above 8 days, it meant something structural was wrong, usually a module with tangled code dependencies that slowed every change.
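Tracking the same percentiles needs nothing beyond Python's standard library; the cycle times below are invented for illustration:

```python
import statistics

# Hypothetical cycle times (in days) for one sprint's merged PRs.
cycle_days = [0.5, 1, 1.5, 2, 2, 3, 3, 4, 5, 9]

p50 = statistics.median(cycle_days)
# quantiles(n=10) returns the 10th..90th percentile cut points; index 8 is P90.
p90 = statistics.quantiles(cycle_days, n=10)[8]
print(p50, p90)  # 2.5 8.6
```

Watching P90 rather than the median surfaces the long tail of stuck work that averages hide.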

Where Cycle Time Gets Stuck

Breaking cycle time into phases is where the real insight lives. A 5-day cycle time is useless without knowing where the time goes.

Coding Time (30-40% of total). How long the developer spends writing and testing locally. If this is high, the task is probably poorly scoped or the code is too complex. At Salesken, our ML pipeline changes had coding times 3x longer than API changes — not because the engineers were slower, but because the code complexity of the pipeline required more local testing.

PR Review Wait Time (20-40%). Time between PR submission and first review. This is often the single biggest bottleneck. At UshaOm, we had no review SLAs for the first year. PRs would sit for 2-3 days because reviewers were busy with their own work. Once we set a 4-hour SLA ("you must leave a first review within 4 business hours"), our median cycle time dropped by 30% in the first month. Nothing else changed. Just the review SLA.

Review Iteration Time (10-20%). Back-and-forth between author and reviewer. Multiple rounds of comments and fixes. Clear code standards and automated linting reduce this. At Salesken, we found that PRs over 500 lines had 2.5x more review iterations than PRs under 200 lines. Not because the code was worse — because reviewers couldn't hold the full context, so they'd catch things in round two that they missed in round one.

Merge to Deploy Time (5-15%). Time from merge to production. Teams with solid CI/CD deploy in minutes. Teams with manual deployment windows can add days. We had a weekly deployment window at UshaOm initially. Moving to continuous deployment cut this phase from 3-4 days average to under 20 minutes.
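The phase breakdown above is just differences between lifecycle timestamps. A sketch, assuming you can pull these five events from your tracker and Git host (the event names and timestamps are hypothetical):

```python
from datetime import datetime

# Hypothetical lifecycle events for a single PR.
events = {
    "work_started": datetime(2026, 2, 2, 9, 0),
    "pr_opened":    datetime(2026, 2, 3, 17, 0),
    "first_review": datetime(2026, 2, 5, 11, 0),
    "pr_merged":    datetime(2026, 2, 6, 10, 0),
    "deployed":     datetime(2026, 2, 6, 14, 0),
}

def hours_between(start, end):
    """Elapsed hours between two named lifecycle events."""
    return (events[end] - events[start]).total_seconds() / 3600

phases = {
    "coding":           hours_between("work_started", "pr_opened"),
    "review_wait":      hours_between("pr_opened", "first_review"),
    "review_iteration": hours_between("first_review", "pr_merged"),
    "merge_to_deploy":  hours_between("pr_merged", "deployed"),
}
total = sum(phases.values())
for name, h in phases.items():
    print(f"{name}: {h:.0f}h ({h / total:.0%} of cycle time)")
```

Run over a month of PRs instead of one, this tells you which phase to attack first.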

How to Improve Cycle Time

Reduce PR Size

This is the single highest-leverage change most teams can make. Smaller PRs get reviewed faster, have fewer bugs, and merge sooner. Aim for under 400 lines changed. Large PRs (1000+ lines) sit in review queues because nobody wants to start them. I've written about this in detail in PR Size and Code Review.

At Salesken, we set a soft limit of 300 lines per PR. Engineers who consistently submitted larger PRs were asked to break them up. After three months of enforcing this, our cycle time P50 dropped from 4.2 days to 2.8 days. The code wasn't different. The review process was just faster because reviewers could actually hold the full context.

Set Review SLAs

Establish team agreements: "PRs get a first review within 4 business hours." Track compliance. This single change often accounts for 30-50% of cycle time improvement because review wait time is the biggest bottleneck on most teams.
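Compliance is easy to track once you log first-review wait times. A minimal sketch — the wait times and the 4-hour SLA value are illustrative, not prescriptive:

```python
# Hypothetical first-review wait times, in business hours, for last week's PRs.
wait_hours = [1.5, 3.0, 6.5, 2.0, 11.0, 4.0, 0.5]

SLA = 4.0  # team agreement: first review within 4 business hours
within = [w for w in wait_hours if w <= SLA]
compliance = len(within) / len(wait_hours)
print(f"SLA compliance: {compliance:.0%}")  # SLA compliance: 71%
```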

Automate Everything You Can

Automated testing, linting, and deployment reduce the manual steps that add time. If your CI pipeline takes 30 minutes, that's 30 minutes of cycle time on every push. Invest in faster pipelines — the ROI is direct.

Address Codebase Complexity

If cycle time is trending up, check whether the modules being changed have increasing complexity. At Salesken, we noticed cycle time on our analytics service creeping from 2 days to 5 days over a quarter. The service hadn't changed processes. But three months of fast feature development (with Cursor, no less) had introduced tight coupling between the analytics models. A dependency mapping exercise revealed 14 circular imports. We spent a sprint untangling them, and cycle time dropped back to 2.5 days.

Reduce Context Switching

Developers working on multiple things simultaneously have longer cycle times per task. Limit work-in-progress. At UshaOm, we moved from allowing 3 concurrent tasks per developer to 2, and individual cycle times dropped 20%. The math is counterintuitive — fewer tasks in progress means more tasks completed per sprint.

Cycle Time and DORA Metrics

Cycle time feeds directly into two DORA metrics:

  • Lead Time for Changes: DORA's lead time includes cycle time plus queue time. Improving cycle time directly improves lead time.
  • Deployment Frequency: Shorter cycle times enable more frequent deployments. Teams that deploy daily typically have cycle times under 1 day.

Elite DORA performers have lead times under one day. If your cycle time alone exceeds one day, you can't be an elite performer by definition. When we tracked this at Salesken, cycle time was our leading indicator — when cycle time increased, deployment frequency dropped about two weeks later. By the time deployment frequency shows the problem, the damage is already compounding.

Common Mistakes

Measuring cycle time without breaking it down. A 5-day cycle time tells you nothing. Is it 4 days of coding and 1 day of review? Or 1 day of coding and 4 days of review wait? The fix for each is completely different.

Optimizing coding speed when review is the bottleneck. I see teams adopt AI coding tools expecting cycle time to improve. It does improve coding time. But if 60% of your cycle time is review wait, cutting coding time in half only reduces total cycle time by 20%. Fix the biggest bottleneck first.

Ignoring structural causes. Cycle time creep isn't always a process problem. Sometimes the codebase is getting more complex and tightly coupled. No amount of process optimization fixes architectural decay. Code health and bus factor analysis reveal structural causes that process metrics miss.

Averaging across work types. A team with 4-hour bug fix cycle times and 3-week feature cycle times has an "average" of about 5 days. That average is meaningless. Segment by work type: bugs, small features, large features. Track each separately.
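Segmenting takes one grouping pass. A sketch with invented numbers that mirrors the mixed workload above:

```python
from collections import defaultdict
from statistics import median

# Hypothetical (work_type, cycle_time_in_days) pairs for completed items.
completed = [
    ("bug", 0.2), ("bug", 0.1), ("bug", 0.3),
    ("small_feature", 2), ("small_feature", 4),
    ("large_feature", 15), ("large_feature", 21),
]

# The blended average hides the spread entirely.
blended = sum(days for _, days in completed) / len(completed)
print(f"blended average: {blended:.1f} days")  # blended average: 6.1 days

by_type = defaultdict(list)
for work_type, days in completed:
    by_type[work_type].append(days)

for work_type, times in by_type.items():
    print(f"{work_type}: median {median(times)} days")
```

The per-type medians (0.2, 3, and 18 days here) are each actionable; the blended 6.1-day "average" describes no work item the team actually does.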


Related Reading

  • Lead Time: What It Is, How It Differs from Cycle Time, and Why It Matters
  • DORA Metrics: The Complete Guide for Engineering Leaders
  • Deployment Frequency: The DORA Metric That Reveals Your True Engineering Velocity
  • PR Size and Code Review: The Data Behind Smaller Pull Requests
  • CI/CD Pipeline: The Definitive Guide to Continuous Integration & Delivery
  • Clean Code: Principles, Practices, and the Real Cost of Messy Code

Frequently Asked Questions

What is cycle time in agile?

Cycle time in agile measures the time from when a team starts working on a user story until it is done and delivered. It is different from lead time, which includes the time the story spends waiting in the backlog before work begins. Shorter cycle times indicate more efficient delivery processes.

How do you reduce cycle time?

Reduce cycle time by keeping work items small, limiting work in progress (WIP), automating testing and deployment, streamlining code review processes, removing handoff delays between teams, and eliminating unnecessary approval gates in your delivery pipeline.
