Glossary
DORA metrics are four key software delivery performance metrics identified by the DevOps Research and Assessment (DORA) team at Google. Originally published in the Accelerate book by Dr. Nicole Forsgren, Jez Humble, and Gene Kim, these metrics have become the industry standard for measuring engineering team effectiveness.
The four DORA metrics are:

- Deployment Frequency: how often you successfully deploy to production
- Lead Time for Changes: how long it takes a commit to reach production
- Change Failure Rate: the percentage of deployments that cause failures
- Mean Time to Recovery (MTTR): how quickly you recover from incidents
Together, these metrics capture both the speed and stability of software delivery. Teams that excel at all four consistently outperform their peers in business outcomes.
DORA metrics matter because they are among the few engineering metrics with rigorous academic research backing their correlation to business outcomes. The DORA team surveyed over 32,000 professionals across multiple years and found that software delivery performance predicts organizational performance.
These are not just vanity metrics. The research shows that teams with better DORA metrics also achieve better business outcomes, including higher profitability, greater market share, and higher employee satisfaction.
Deployment Frequency

What it measures: How often your organization successfully releases to production.
Why it matters: High deployment frequency indicates small batch sizes, automated deployment pipelines, and confidence in your release process. Teams that deploy frequently can ship features faster and respond to customer feedback more quickly.
How to measure it: Count the number of successful production deployments per day, week, or month. Include all deployment types (feature releases, bug fixes, configuration changes).
| Performance Level | Deployment Frequency |
|---|---|
| Elite | On-demand (multiple deploys per day) |
| High | Between once per day and once per week |
| Medium | Between once per week and once per month |
| Low | Between once per month and once every six months |
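As a minimal sketch, deployment frequency can be computed from a list of deployment timestamps (the `deploy_times` data here is hypothetical; in practice it would come from your CI/CD provider):

```python
from collections import Counter
from datetime import datetime

def deployment_frequency(deploy_times: list[datetime]) -> float:
    """Average successful production deployments per week.

    Note: this averages only over weeks that had at least one
    deployment; weeks with zero deploys are not counted.
    """
    if not deploy_times:
        return 0.0
    # Group deployments by ISO (year, week), then average the counts.
    per_week = Counter(t.isocalendar()[:2] for t in deploy_times)
    return sum(per_week.values()) / len(per_week)

deploys = [
    datetime(2024, 3, 4), datetime(2024, 3, 5), datetime(2024, 3, 7),  # ISO week 10
    datetime(2024, 3, 12),                                             # ISO week 11
]
print(deployment_frequency(deploys))  # 2.0 (3 deploys + 1 deploy over 2 weeks)
```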
Common pitfalls: counting failed deployment attempts as successful deployments, excluding configuration changes and hotfixes from the count, and inflating the number with trivial or empty deployments.
Lead Time for Changes

What it measures: The time from when a developer commits code to when that code is running in production.
Why it matters: Short lead times mean your pipeline is efficient and your team can iterate quickly. Long lead times indicate bottlenecks in code review, testing, approval processes, or deployment pipelines.
How to measure it: Track the median time from the first commit in a pull request to the moment that change is live in production. Use your CI/CD data (GitHub Actions, GitLab CI, Jenkins) combined with deployment timestamps.
| Performance Level | Lead Time for Changes |
|---|---|
| Elite | Less than one hour |
| High | Between one day and one week |
| Medium | Between one week and one month |
| Low | Between one month and six months |
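The median calculation described above can be sketched as follows, assuming each change is represented as a hypothetical `(first_commit_time, deployed_time)` pair pulled from VCS and CI/CD data:

```python
from datetime import datetime, timedelta
from statistics import median

def lead_time_for_changes(changes: list[tuple[datetime, datetime]]) -> timedelta:
    """Median time from first commit to running in production."""
    return median(deployed - committed for committed, deployed in changes)

changes = [
    (datetime(2024, 3, 4, 9), datetime(2024, 3, 4, 11)),   # 2 hours
    (datetime(2024, 3, 5, 10), datetime(2024, 3, 6, 10)),  # 24 hours
    (datetime(2024, 3, 7, 8), datetime(2024, 3, 7, 14)),   # 6 hours
]
print(lead_time_for_changes(changes))  # 6:00:00 (the median of the three)
```

Using the median rather than the mean keeps one pathological outlier (say, a change stuck in review for a month) from distorting the picture.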
Where time is typically lost: waiting for code review, slow or flaky test suites, manual approval processes, and queued or manual deployment pipelines.
Change Failure Rate

What it measures: The percentage of deployments that result in a degraded service or require remediation (rollback, hotfix, patch).
Why it matters: Change failure rate balances deployment frequency. A team deploying 50 times per day with a 40% failure rate is not performing well. High performers deploy frequently AND reliably.
How to measure it: Divide the number of deployments that caused incidents or required rollback by the total number of deployments in the same period.
| Performance Level | Change Failure Rate |
|---|---|
| Elite | 0-5% |
| High | 5-10% |
| Medium | 10-15% |
| Low | 16-30%+ |
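A sketch of the calculation, correlating incident timestamps with deployment timestamps. The attribution heuristic here (blame an incident on the most recent preceding deployment) is an assumption for illustration, not part of the DORA definition:

```python
from datetime import datetime

def change_failure_rate(deploys: list[datetime], incidents: list[datetime]) -> float:
    """Fraction of deployments followed by an incident before the next deploy."""
    deploys = sorted(deploys)
    failed = set()
    for incident in incidents:
        prior = [d for d in deploys if d <= incident]
        if prior:
            # Assumption: blame the latest deployment preceding the incident.
            failed.add(prior[-1])
    return len(failed) / len(deploys) if deploys else 0.0

deploys = [datetime(2024, 3, d) for d in (4, 5, 6, 7)]
incidents = [datetime(2024, 3, 5, 14)]  # occurred after the Mar 5 deploy
print(change_failure_rate(deploys, incidents))  # 0.25 (1 of 4 deploys failed)
```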
How to reduce it: deploy smaller batches, invest in automated testing, roll changes out progressively (for example with feature flags or canary deployments), and make rollbacks fast and routine.
Mean Time to Recovery (MTTR)

What it measures: How quickly your team restores service after a production incident.
Why it matters: Failures are inevitable. What matters is how quickly you detect and recover. Low MTTR requires good monitoring, clear incident response processes, and the ability to quickly diagnose and fix issues.
How to measure it: Track the median time from when a production incident is detected to when service is fully restored. Use your incident management tool (PagerDuty, Opsgenie, incident.io) for this data.
| Performance Level | Mean Time to Recovery |
|---|---|
| Elite | Less than one hour |
| High | Less than one day |
| Medium | Between one day and one week |
| Low | More than one week |
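Given incident records exported from an incident management tool as hypothetical `(detected_at, restored_at)` pairs, the calculation is a median over recovery durations:

```python
from datetime import datetime, timedelta
from statistics import median

def time_to_recovery(incidents: list[tuple[datetime, datetime]]) -> timedelta:
    """Median time from incident detection to full service restoration."""
    return median(restored - detected for detected, restored in incidents)

incidents = [
    (datetime(2024, 3, 4, 9, 0), datetime(2024, 3, 4, 9, 45)),    # 45 minutes
    (datetime(2024, 3, 6, 2, 0), datetime(2024, 3, 6, 6, 0)),     # 4 hours
    (datetime(2024, 3, 8, 13, 0), datetime(2024, 3, 8, 13, 30)),  # 30 minutes
]
print(time_to_recovery(incidents))  # 0:45:00
```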
How to improve MTTR: invest in monitoring and alerting so incidents are detected quickly, maintain clear incident response processes and runbooks, and make rollbacks and hotfix deployments fast and well-rehearsed.
How to Measure DORA Metrics

There are several approaches to measuring DORA metrics, ranging from manual surveys to fully automated tooling:
Manual surveys. The simplest approach. Ask your team: "How often did we deploy last month? How long does it take to get a commit to production? What percentage of our deployments caused issues? How quickly did we recover from our last incident?" This works for small teams starting out.
CI/CD pipeline data. Most CI/CD platforms (GitHub Actions, GitLab CI, Jenkins, CircleCI) track deployment frequency and lead time natively. Pull this data from their APIs.
Incident management data. Tools like PagerDuty, Opsgenie, and incident.io track MTTR. Change failure rate can be derived by correlating incident timestamps with deployment timestamps.
Dedicated DORA platforms. Tools like Sleuth, LinearB, Jellyfish, Swarmia, and Faros AI aggregate data from multiple sources and calculate DORA metrics automatically. These provide dashboards, trends, and team-level breakdowns.
Codebase intelligence tools. Glue calculates engineering health metrics including code change velocity and team collaboration patterns that complement DORA metrics with deeper codebase-level insights.
DORA Metrics Benchmarks

Based on the latest State of DevOps Report and industry data:
| Metric | Elite | High | Medium | Low |
|---|---|---|---|---|
| Deployment Frequency | Multiple per day | Weekly to daily | Monthly to weekly | Monthly to once every six months |
| Lead Time for Changes | < 1 hour | 1 day - 1 week | 1 week - 1 month | 1-6 months |
| Change Failure Rate | 0-5% | 5-10% | 10-15% | 16-30% |
| MTTR | < 1 hour | < 1 day | 1 day - 1 week | > 1 week |
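The benchmark table can be turned into a simple classifier. A sketch for lead time; note the published tiers leave gaps (for example between one hour and one day), so this version assigns anything below a tier's upper bound to that tier:

```python
from datetime import timedelta

# Upper bounds per tier, taken from the lead-time row of the table above.
LEAD_TIME_TIERS = [
    ("Elite", timedelta(hours=1)),
    ("High", timedelta(weeks=1)),
    ("Medium", timedelta(days=30)),
]

def lead_time_tier(lead_time: timedelta) -> str:
    """Map a median lead time onto the DORA performance tiers."""
    for tier, upper_bound in LEAD_TIME_TIERS:
        if lead_time < upper_bound:
            return tier
    return "Low"

print(lead_time_tier(timedelta(minutes=30)))  # Elite
print(lead_time_tier(timedelta(days=3)))      # High
print(lead_time_tier(timedelta(days=10)))     # Medium
print(lead_time_tier(timedelta(days=90)))     # Low
```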
Key insight from the research: Speed and stability are NOT tradeoffs. Elite performers are both faster AND more reliable. The common belief that "moving fast breaks things" is a myth. Teams with better practices achieve both.
DORA Metrics vs. Other Frameworks

DORA vs. SPACE. The SPACE framework (Satisfaction, Performance, Activity, Communication, Efficiency) was proposed by researchers at GitHub and Microsoft as a broader framework. DORA metrics focus specifically on delivery performance, while SPACE captures developer experience more holistically. They are complementary, not competing.
DORA vs. Velocity. Sprint velocity (story points per sprint) measures planning accuracy, not delivery performance. A team can have high velocity while deploying once a month. DORA metrics measure what actually reaches production.
DORA vs. Lines of Code. Lines of code is a poor proxy for productivity. DORA metrics measure outcomes (deployments, stability) rather than outputs (code written).
Common Misconceptions

"DORA metrics are only for DevOps teams." Wrong. DORA metrics measure the entire software delivery process, from development through deployment. They are relevant to engineering leadership, product teams, and anyone who cares about how fast and reliably software reaches users.
"We need to optimize all four metrics simultaneously." Start with one. Most teams benefit from focusing on deployment frequency first, because increasing frequency naturally drives improvements in the other three. When you deploy small batches frequently, lead time drops, failures are easier to diagnose, and recovery is faster.
"High deployment frequency means chaos." The opposite. Teams that deploy frequently typically have better automation, better testing, and better processes. Low deployment frequency often indicates manual, risky deployment processes.
"DORA metrics can be gamed." Any metric can be gamed. Deploying empty commits increases frequency. Ignoring incidents reduces change failure rate. The solution is to use all four metrics together and focus on trends rather than absolute numbers.
Frequently Asked Questions

Q: What are the 4 DORA metrics? A: The four DORA metrics are: (1) Deployment Frequency, how often you deploy to production, (2) Lead Time for Changes, time from commit to production, (3) Change Failure Rate, percentage of deployments causing failures, and (4) Mean Time to Recovery, how quickly you recover from incidents.
Q: How do you measure DORA metrics? A: You can measure DORA metrics through manual surveys, CI/CD pipeline data, incident management tools, or dedicated platforms like Sleuth, LinearB, or Faros AI. Most teams start with CI/CD data for deployment frequency and lead time, and incident management data for MTTR and change failure rate.
Q: What is a good deployment frequency? A: Elite teams deploy multiple times per day. High-performing teams deploy between daily and weekly. If you are deploying less than once per month, there is significant room for improvement. The key is to deploy small batches frequently rather than large batches infrequently.
Q: How do DORA metrics relate to team performance? A: Research by the DORA team shows that teams with better DORA metrics consistently achieve better business outcomes including higher profitability, market share, and employee satisfaction. The four metrics together capture both the speed and stability of software delivery.