Glossary
Cycle time is the total elapsed time it takes to complete a single unit of work, from the moment active work begins until the work is ready for delivery.
In software development, that means how long it takes to move a change from "actively being worked on" to "deployed to production." It measures development velocity and process efficiency.
The key distinction: the cycle time clock only starts when active work begins. It doesn't include time the task spends waiting in the backlog before anyone picks it up. That waiting time is part of lead time, a different metric entirely. I'll explain the difference below because I've watched teams confuse these two for years and optimize the wrong one.
The most common calculation:
Cycle Time = End Date - Start Date
In software development specifically:
Cycle Time = (Time PR Merged) - (Time Work Started)
Or in an Agile context:
Cycle Time = (Time Task Moved to Done) - (Time Task Started)
For averages:
Cycle Time = Total Time to Complete / Number of Units Completed
Example: if your team completed 40 tasks in a month with 160 working hours, your average cycle time is 4 hours per task (160 / 40). If a single feature takes three days from start to deployment, the cycle time for that feature is three days.
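A minimal sketch of these formulas in Python, assuming each task record carries a start and finish timestamp (the field names are placeholders for whatever your tracker exports):

```python
from datetime import datetime, timedelta

# Hypothetical task records: when active work began and when the
# work was ready for delivery.
tasks = [
    {"started": datetime(2024, 3, 4, 9, 0), "finished": datetime(2024, 3, 4, 13, 0)},
    {"started": datetime(2024, 3, 5, 9, 0), "finished": datetime(2024, 3, 8, 9, 0)},
]

# Cycle Time = End Date - Start Date, per unit of work
cycle_times = [t["finished"] - t["started"] for t in tasks]

# Average = total time to complete / number of units completed
average = sum(cycle_times, timedelta()) / len(cycle_times)
print(f"average cycle time: {average}")
```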
At Salesken, we tracked cycle time per PR rather than per Jira ticket. A Jira ticket might sit "in progress" for a week while the engineer works on three different things. The PR tells you when code actually started flowing and when it shipped. That's the number that matters.
People confuse these constantly. At Salesken, I had a PM who kept saying "our cycle time is two weeks" when she meant lead time. The actual cycle time was 2-3 days. The other 10 days were backlog wait time and deployment queuing. Fixing cycle time wouldn't have helped her. Fixing the backlog prioritization process would have.
Cycle Time: the clock starts when active work begins and stops when the work is ready for delivery. It measures execution speed.
Lead Time: the clock starts when the work is requested and stops when it reaches the customer. It includes backlog wait time plus the entire cycle time.
Here's a concrete example. A task gets requested on Monday. It sits in the backlog for a week. On the following Monday, an engineer starts working. They finish on Wednesday. The feature ships on Thursday. Lead time is ten days (the first Monday's request to the following Thursday's ship). Cycle time is three days (work starting Monday through Thursday's ship). The seven-day gap between them is pure queue time.
This distinction matters enormously for engineering leaders. A long lead time might mean your process is slow — or it might just mean your backlog is large. A long cycle time means the work itself is taking too long: either the task is genuinely complex, the engineer is context-switching, or they're blocked by dependencies.
At UshaOm, where I ran a team of 27 engineers building an e-commerce platform, our lead time was 3 weeks but our cycle time was 2 days. The gap was entirely backlog queue time. We didn't need faster engineers. We needed better prioritization and smaller batches entering the sprint.
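A sketch of that decomposition, using illustrative timestamps matching the example above and counting the cycle clock through to deployment:

```python
from datetime import datetime

requested = datetime(2024, 3, 4)   # Monday: task requested
started   = datetime(2024, 3, 11)  # following Monday: work begins
shipped   = datetime(2024, 3, 14)  # Thursday: deployed

lead_time  = shipped - requested   # 10 days: request to delivery
cycle_time = shipped - started     # 3 days: active work to delivery
queue_time = started - requested   # 7 days: backlog wait, the gap

print(f"lead={lead_time.days}d cycle={cycle_time.days}d queue={queue_time.days}d")
```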
Benchmarks vary by team size and work type. The team-size rows below are for individual PRs; the work-type rows cover the whole item:
| Team or Work Type | Good | Average | Needs Improvement |
|---|---|---|---|
| Small startup (5-15 devs) | < 1 day | 1-3 days | > 5 days |
| Mid-size team (15-50 devs) | < 2 days | 2-5 days | > 7 days |
| Enterprise (50+ devs) | < 3 days | 3-7 days | > 10 days |
| Bug fixes | < 4 hours | 4-24 hours | > 2 days |
| Small features | < 3 days | 3-5 days | > 7 days |
| Large features | < 2 weeks | 2-4 weeks | > 6 weeks |
At Salesken, our median cycle time for bug fixes was about 6 hours — not great, but acceptable for a real-time voice AI system where most fixes touched the audio pipeline and required careful testing. Our feature cycle time averaged 3-4 days. The number I watched most closely was the 90th percentile: when that crept above 8 days, it meant something structural was wrong, usually a module with tangled code dependencies that slowed every change.
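Tracking the P50 and P90 together is straightforward once you have per-item durations. A sketch using Python's standard library (the sample durations are made up):

```python
import statistics

# Hypothetical per-PR cycle times, in days
cycle_times_days = [1.2, 0.8, 2.5, 3.1, 1.9, 6.4, 2.2, 9.8, 1.1, 4.0]

p50 = statistics.median(cycle_times_days)
# quantiles(n=10) returns the nine deciles; index 8 is the 90th percentile
p90 = statistics.quantiles(cycle_times_days, n=10)[8]

print(f"P50 = {p50:.1f} days, P90 = {p90:.1f} days")
if p90 > 8:
    print("P90 above 8 days: look for a structural cause")
```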
Breaking cycle time into phases is where the real insight lives. A 5-day cycle time is useless without knowing where the time goes.
Coding Time (30-40% of total). How long the developer spends writing and testing locally. If this is high, the task is probably poorly scoped or the code is too complex. At Salesken, our ML pipeline changes had coding times 3x longer than API changes — not because the engineers were slower, but because the code complexity of the pipeline required more local testing.
PR Review Wait Time (20-40%). Time between PR submission and first review. This is often the single biggest bottleneck. At UshaOm, we had no review SLAs for the first year. PRs would sit for 2-3 days because reviewers were busy with their own work. Once we set a 4-hour SLA ("you must leave a first review within 4 business hours"), our median cycle time dropped by 30% in the first month. Nothing else changed. Just the review SLA.
Review Iteration Time (10-20%). Back-and-forth between author and reviewer. Multiple rounds of comments and fixes. Clear code standards and automated linting reduce this. At Salesken, we found that PRs over 500 lines had 2.5x more review iterations than PRs under 200 lines. Not because the code was worse — because reviewers couldn't hold the full context, so they'd catch things in round two that they missed in round one.
Merge to Deploy Time (5-15%). Time from merge to production. Teams with solid CI/CD deploy in minutes. Teams with manual deployment windows can add days. We had a weekly deployment window at UshaOm initially. Moving to continuous deployment cut this phase from 3-4 days average to under 20 minutes.
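Given event timestamps for one unit of work, the phase breakdown is a few subtractions. A sketch (the event names are placeholders; map them to your Git host's API or webhook fields):

```python
from datetime import datetime

# Hypothetical event timestamps for one change
events = {
    "work_started": datetime(2024, 3, 4, 9, 0),
    "pr_opened":    datetime(2024, 3, 5, 15, 0),
    "first_review": datetime(2024, 3, 6, 11, 0),
    "pr_merged":    datetime(2024, 3, 7, 10, 0),
    "deployed":     datetime(2024, 3, 7, 10, 25),
}

phases = {
    "coding time":      events["pr_opened"] - events["work_started"],
    "review wait":      events["first_review"] - events["pr_opened"],
    "review iteration": events["pr_merged"] - events["first_review"],
    "merge to deploy":  events["deployed"] - events["pr_merged"],
}

total = events["deployed"] - events["work_started"]
for name, span in phases.items():
    print(f"{name:>18}: {span}  ({span / total:.0%} of total)")
```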
Keeping PRs small is the single highest-leverage change most teams can make. Smaller PRs get reviewed faster, have fewer bugs, and merge sooner. Aim for under 400 lines changed. Large PRs (1000+ lines) sit in review queues because nobody wants to start them. I've written about this in detail in PR Size and Code Review.
At Salesken, we set a soft limit of 300 lines per PR. Engineers who consistently submitted larger PRs were asked to break them up. After three months of enforcing this, our cycle time P50 dropped from 4.2 days to 2.8 days. The code wasn't different. The review process was just faster because reviewers could actually hold the full context.
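A soft limit like this is easy to enforce in CI. A sketch that fails a pipeline step when a branch's diff exceeds the limit (the 300-line threshold and the main target branch are assumptions; adjust both):

```python
import subprocess
import sys

SOFT_LIMIT = 300  # lines changed; pick whatever your team agrees on

# Lines changed on this branch relative to the target branch
stat = subprocess.run(
    ["git", "diff", "--shortstat", "origin/main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout
# --shortstat prints e.g. " 3 files changed, 120 insertions(+), 40 deletions(-)"
tokens = stat.split()
changed = sum(
    int(tok) for tok, nxt in zip(tokens, tokens[1:])
    if nxt.startswith(("insertion", "deletion"))
)

if changed > SOFT_LIMIT:
    print(f"{changed} lines changed exceeds the {SOFT_LIMIT}-line soft limit; consider splitting this PR")
    sys.exit(1)
```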
Establish team agreements: "PRs get a first review within 4 business hours." Track compliance. This single change often accounts for 30-50% of cycle time improvement because review wait time is the biggest bottleneck on most teams.
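Compliance is simple to measure if you export PR open and first-review timestamps. A sketch (field names are placeholders; a real version would count business hours rather than wall-clock time):

```python
from datetime import datetime, timedelta

SLA = timedelta(hours=4)  # the agreed first-review window

# Hypothetical PR records exported from your Git host
prs = [
    {"id": 101, "opened": datetime(2024, 3, 4, 9, 0),
     "first_review": datetime(2024, 3, 4, 11, 30)},
    {"id": 102, "opened": datetime(2024, 3, 4, 10, 0),
     "first_review": datetime(2024, 3, 5, 16, 0)},
]

# Wall-clock elapsed time; swap in a business-hours calculation for real use
violations = [p["id"] for p in prs if p["first_review"] - p["opened"] > SLA]
compliance = 1 - len(violations) / len(prs)
print(f"SLA compliance: {compliance:.0%}; violating PRs: {violations}")
```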
Automated testing, linting, and deployment reduce the manual steps that add time. If your CI pipeline takes 30 minutes, that's 30 minutes of cycle time on every push. Invest in faster pipelines — the ROI is direct.
If cycle time is trending up, check whether the modules being changed have increasing complexity. At Salesken, we noticed cycle time on our analytics service creeping from 2 days to 5 days over a quarter. The service hadn't changed processes. But three months of fast feature development (with Cursor, no less) had introduced tight coupling between the analytics models. A dependency mapping exercise revealed 14 circular imports. We spent a sprint untangling them, and cycle time dropped back to 2.5 days.
Developers working on multiple things simultaneously have longer cycle times per task. Limit work-in-progress. At UshaOm, we moved from allowing 3 concurrent tasks per developer to 2, and individual cycle times dropped 20%. The math is counterintuitive — fewer tasks in progress means more tasks completed per sprint.
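Little's Law is one way to see the math: average WIP = throughput × average cycle time, so at a fixed throughput, per-task cycle time scales directly with WIP. A worked sketch with assumed numbers:

```python
# Little's Law: wip = throughput * cycle_time  =>  cycle_time = wip / throughput
throughput = 1.0  # tasks completed per developer per day (assumed constant)

for wip in (3, 2):
    cycle_time = wip / throughput
    print(f"WIP of {wip}: average cycle time is {cycle_time:.0f} days per task")
# Real teams won't match this exactly: throughput itself shifts
# as context switching drops.
```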
Cycle time feeds directly into two DORA metrics: Lead Time for Changes, of which cycle time is the largest controllable component, and Deployment Frequency, since changes that take longer to finish ship less often.
Elite DORA performers have lead times under one day. If your cycle time alone exceeds one day, you can't be an elite performer by definition. When we tracked this at Salesken, cycle time was our leading indicator — when cycle time increased, deployment frequency dropped about two weeks later. By the time deployment frequency shows the problem, the damage is already compounding.
Measuring cycle time without breaking it down. A 5-day cycle time tells you nothing. Is it 4 days of coding and 1 day of review? Or 1 day of coding and 4 days of review wait? The fix for each is completely different.
Optimizing coding speed when review is the bottleneck. I see teams adopt AI coding tools expecting cycle time to improve. It does improve coding time. But if 60% of your cycle time is review wait, cutting coding time in half only reduces total cycle time by 20%. Fix the biggest bottleneck first.
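The arithmetic is worth making explicit. A sketch with the numbers from the paragraph above:

```python
# Assume a 5-day cycle time: 40% coding, 60% review wait
coding, review_wait = 2.0, 3.0  # days
total = coding + review_wait

# Halving coding time only removes half of the coding share
improved = coding / 2 + review_wait
print(f"{total:.0f} days -> {improved:.0f} days: "
      f"a {1 - improved / total:.0%} improvement")  # 20%
```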
Ignoring structural causes. Cycle time creep isn't always a process problem. Sometimes the codebase is getting more complex and tightly coupled. No amount of process optimization fixes architectural decay. Code health and bus factor analysis reveal structural causes that process metrics miss.
Averaging across work types. A team with 4-hour bug fix cycle times and 3-week feature cycle times has an "average" of about 5 days. That average is meaningless. Segment by work type: bugs, small features, large features. Track each separately.
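Segmenting is a one-pass grouping once items carry a work-type tag. A sketch (tags and durations are illustrative):

```python
from statistics import median

# Hypothetical completed items tagged by work type, cycle time in hours
items = [
    {"type": "bug", "hours": 4},
    {"type": "bug", "hours": 9},
    {"type": "small_feature", "hours": 56},
    {"type": "large_feature", "hours": 360},
]

by_type: dict[str, list[int]] = {}
for item in items:
    by_type.setdefault(item["type"], []).append(item["hours"])

for work_type, hours in sorted(by_type.items()):
    print(f"{work_type}: median {median(hours)}h across {len(hours)} items")
```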
What is cycle time in agile?
Cycle time in agile measures the time from when a team starts working on a user story until it is done and delivered. It is different from lead time, which includes the time the story spends waiting in the backlog before work begins. Shorter cycle times indicate more efficient delivery processes.
How do you reduce cycle time?
Reduce cycle time by keeping work items small, limiting work in progress (WIP), automating testing and deployment, streamlining code review processes, removing handoff delays between teams, and eliminating unnecessary approval gates in your delivery pipeline.