
DORA Metrics

DORA metrics are four key software delivery metrics identified by the DevOps Research and Assessment team.

March 4, 2026 · 10 min read

I started tracking DORA metrics at Salesken after a quarter where we shipped a lot but broke even more. These four numbers gave me the first honest picture of our delivery health — and the clarity to fix what mattered.

DORA metrics are four key software delivery performance metrics identified by the DevOps Research and Assessment (DORA) team at Google. Originally published in the Accelerate book by Dr. Nicole Forsgren, Jez Humble, and Gene Kim, these metrics have become the industry standard for measuring engineering team effectiveness.

The four DORA metrics are:

  1. Deployment Frequency — How often your team deploys to production
  2. Lead Time for Changes — How long it takes from code commit to production deployment
  3. Change Failure Rate — What percentage of deployments cause failures in production
  4. Mean Time to Recovery (MTTR) — How quickly your team recovers from a production failure

Together, these metrics capture both the speed and stability of software delivery. Teams that excel at all four consistently outperform their peers in business outcomes.


Why DORA Metrics Matter

DORA metrics matter because they are the only engineering metrics with rigorous academic research backing their correlation to business outcomes. The DORA team surveyed over 32,000 professionals across multiple years and found that:

  • High-performing teams deploy 973x more frequently than low performers
  • High performers have 6,570x faster lead times from commit to deploy
  • High performers have 3x lower change failure rates
  • High performers recover from incidents 6,570x faster

These are not just vanity metrics. The research shows, and my own experience confirms, that teams with better DORA metrics also have:

  • Higher organizational performance (profitability, market share)
  • Lower employee burnout
  • Higher job satisfaction
  • Better ability to meet business goals

The 4 DORA Metrics Explained

1. Deployment Frequency

What it measures: How often your organization successfully releases to production.

Why it matters: High deployment frequency indicates small batch sizes, automated deployment pipelines, and confidence in your release process. Teams that deploy frequently can ship features faster and respond to customer feedback more quickly.

How to measure it: Count the number of successful production deployments per day, week, or month. Include all deployment types (feature releases, bug fixes, configuration changes).

Performance Level   Deployment Frequency
Elite               On-demand (multiple deploys per day)
High                Between once per day and once per week
Medium              Between once per week and once per month
Low                 Between once per month and once every six months

Common pitfalls:

  • Counting deployments that do not reach end users (feature-flagged off) can inflate this number
  • Microservices architectures naturally have higher frequency; normalize per service if needed
  • Hotfixes and rollbacks should be counted separately from planned deployments
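
The counting itself is simple once you have the timestamps. A minimal sketch in Python, assuming you have already pulled successful production deploy timestamps from your CI/CD provider's API (the sample data below is hypothetical):

```python
from datetime import datetime, timedelta

def deployments_per_week(deploy_times: list[datetime]) -> float:
    """Average successful production deploys per week over the observed span."""
    if not deploy_times:
        return 0.0
    span = max(deploy_times) - min(deploy_times)
    # Treat very short histories as one week to avoid division by zero.
    weeks = max(span / timedelta(weeks=1), 1.0)
    return len(deploy_times) / weeks

# Hypothetical data: one deploy per day for two weeks
deploys = [datetime(2026, 3, 1) + timedelta(days=i) for i in range(14)]
print(deployments_per_week(deploys))  # roughly 7.5 deploys per week
```

Filtering the input list to successful production deploys only (and excluding hotfixes and rollbacks, per the pitfalls above) is where most of the real work lives.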

2. Lead Time for Changes

What it measures: The time from when a developer commits code to when that code is running in production.

Why it matters: Short lead times mean your pipeline is efficient and your team can iterate quickly. Long lead times indicate bottlenecks in code review, testing, approval processes, or deployment pipelines.

How to measure it: Track the median time from the first commit in a pull request to the moment that change is live in production. Use your CI/CD data (GitHub Actions, GitLab CI, Jenkins) combined with deployment timestamps.

Performance Level   Lead Time for Changes
Elite               Less than one hour
High                Between one day and one week
Medium              Between one week and one month
Low                 Between one month and six months

Where time is typically lost:

  • Code review queues (waiting for reviewers)
  • Manual QA cycles
  • Change advisory board (CAB) approvals
  • Deployment windows (only deploying on Tuesdays)
  • Environment provisioning
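
In code, lead time is just a median over (commit, deploy) timestamp pairs. A sketch, assuming the pairs have already been joined from your VCS and deployment data (the sample values are hypothetical):

```python
from datetime import datetime
from statistics import median

def median_lead_time_hours(changes: list[tuple[datetime, datetime]]) -> float:
    """Median hours from first commit to production deploy.

    `changes` pairs (commit_time, deploy_time) for each shipped change.
    """
    durations = [(deploy - commit).total_seconds() / 3600
                 for commit, deploy in changes]
    return median(durations)

changes = [
    (datetime(2026, 3, 2, 9, 0), datetime(2026, 3, 2, 13, 0)),   # 4 h
    (datetime(2026, 3, 3, 10, 0), datetime(2026, 3, 4, 10, 0)),  # 24 h
    (datetime(2026, 3, 5, 8, 0), datetime(2026, 3, 5, 9, 30)),   # 1.5 h
]
print(median_lead_time_hours(changes))  # 4.0
```

The median (rather than the mean) keeps one pathological month-old pull request from swamping the signal.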

3. Change Failure Rate

What it measures: The percentage of deployments that result in a degraded service or require remediation (rollback, hotfix, patch).

Why it matters: Change failure rate balances deployment frequency. A team deploying 50 times per day with a 40% failure rate is not performing well. High performers deploy frequently AND reliably.

How to measure it: Divide the number of deployments that caused incidents or required rollback by the total number of deployments in the same period.

Performance Level   Change Failure Rate
Elite               0-5%
High                5-10%
Medium              10-15%
Low                 16-30% or more

How to reduce it:

  • Invest in automated testing (unit, integration, end-to-end)
  • Use feature flags to decouple deployment from release
  • Implement canary deployments and progressive rollouts
  • Improve code review quality and coverage
  • Add pre-deployment validation checks
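
The formula itself is a simple ratio; the judgment is in deciding which deploys count as "failed". A sketch with hypothetical numbers:

```python
def change_failure_rate(total_deploys: int, failed_deploys: int) -> float:
    """Percentage of deploys that caused an incident or needed remediation."""
    if total_deploys == 0:
        raise ValueError("no deployments in period")
    return 100.0 * failed_deploys / total_deploys

# Hypothetical month: 40 deploys, 3 required a rollback or hotfix
print(change_failure_rate(40, 3))  # 7.5
```

A 7.5% rate would land in the "High" band of the table above.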

4. Mean Time to Recovery (MTTR)

What it measures: How quickly your team restores service after a production incident.

Why it matters: Failures are inevitable. What matters is how quickly you detect and recover. Low MTTR requires good monitoring, clear incident response processes, and the ability to quickly diagnose and fix issues.

How to measure it: Track the median time from when a production incident is detected to when service is fully restored. Use your incident management tool (PagerDuty, Opsgenie, incident.io) for this data.

Performance Level   Mean Time to Recovery
Elite               Less than one hour
High                Less than one day
Medium              Between one day and one week
Low                 More than one week

How to improve MTTR:

  • Implement comprehensive monitoring and alerting
  • Create runbooks for common failure modes
  • Practice incident response through game days
  • Ensure at least 2-3 people can debug each critical system (bus factor matters here)
  • Automate rollback procedures
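
A sketch of the calculation over (detected, resolved) timestamp pairs, as you might export them from an incident tool (the pairing and sample values are assumptions; field names vary by tool):

```python
from datetime import datetime
from statistics import median

def mttr_minutes(incidents: list[tuple[datetime, datetime]]) -> float:
    """Median minutes from incident detection to full restoration."""
    return median((resolved - detected).total_seconds() / 60
                  for detected, resolved in incidents)

incidents = [
    (datetime(2026, 3, 1, 14, 0), datetime(2026, 3, 1, 14, 25)),  # 25 min
    (datetime(2026, 3, 8, 2, 10), datetime(2026, 3, 8, 3, 40)),   # 90 min
    (datetime(2026, 3, 15, 9, 0), datetime(2026, 3, 15, 9, 45)),  # 45 min
]
print(mttr_minutes(incidents))  # 45.0
```

Despite the name, most tools report the median rather than the mean, for the same outlier-resistance reason as lead time.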

How to Measure DORA Metrics

There are several approaches to measuring DORA metrics, ranging from manual surveys to fully automated tooling:

Manual surveys. The simplest approach. Ask your team: "How often did we deploy last month? How long does it take to get a commit to production? What percentage of our deployments caused issues? How quickly did we recover from our last incident?" This works for small teams starting out.

CI/CD pipeline data. Most CI/CD platforms (GitHub Actions, GitLab CI, Jenkins, CircleCI) track deployment frequency and lead time natively. Pull this data from their APIs.

Incident management data. Tools like PagerDuty, Opsgenie, and incident.io track MTTR. Change failure rate can be derived by correlating incident timestamps with deployment timestamps.
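
One way to sketch that correlation: treat a deploy as "failed" if an incident started within a fixed window after it. The two-hour window below is an arbitrary assumption you would tune, and real attribution usually also checks which service the deploy touched:

```python
from datetime import datetime, timedelta

def correlated_failure_rate(deploys: list[datetime],
                            incident_starts: list[datetime],
                            window: timedelta = timedelta(hours=2)) -> float:
    """Rough change failure rate: % of deploys followed by an incident
    within `window`. Attribution by time alone is an approximation."""
    if not deploys:
        return 0.0
    failed = sum(any(d <= inc <= d + window for inc in incident_starts)
                 for d in deploys)
    return 100.0 * failed / len(deploys)

# Hypothetical day: three deploys, one incident half an hour after the second
deploys = [datetime(2026, 3, 10, 9), datetime(2026, 3, 10, 12), datetime(2026, 3, 10, 15)]
incident_starts = [datetime(2026, 3, 10, 12, 30)]
print(round(correlated_failure_rate(deploys, incident_starts), 1))  # 33.3
```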

Dedicated DORA platforms. Tools like Sleuth, Glue, Swarmia, and Faros AI aggregate data from multiple sources and calculate DORA metrics automatically. These provide dashboards, trends, and team-level breakdowns.

Codebase intelligence tools. Glue calculates engineering health metrics including code change velocity and team collaboration patterns that complement DORA metrics with deeper codebase-level insights.


DORA Metrics Benchmarks (2026)

Based on the latest State of DevOps Report and industry data:

Metric                  Elite              High              Medium              Low
Deployment Frequency    Multiple per day   Weekly to daily   Monthly to weekly   Monthly to biannual
Lead Time for Changes   < 1 hour           1 day - 1 week    1 week - 1 month    1-6 months
Change Failure Rate     0-5%               5-10%             10-15%              16-30%
MTTR                    < 1 hour           < 1 day           1 day - 1 week      > 1 week

Key insight from the research: Speed and stability are NOT tradeoffs. Elite performers are both faster AND more reliable. The common belief that "moving fast breaks things" is a myth. Teams with better practices achieve both.


DORA Metrics vs. Other Engineering Metrics

DORA vs. SPACE. The SPACE framework (Satisfaction, Performance, Activity, Communication, Efficiency) was proposed by researchers at GitHub and Microsoft as a broader framework. DORA metrics focus specifically on delivery performance, while SPACE captures developer experience more holistically. They are complementary, not competing.

DORA vs. Velocity. Sprint velocity (story points per sprint) measures planning accuracy, not delivery performance. A team can have high velocity while deploying once a month. DORA metrics measure what actually reaches production.

DORA vs. Lines of Code. Lines of code is a poor proxy for productivity. DORA metrics measure outcomes (deployments, stability) rather than outputs (code written).


Common Misconceptions

"DORA metrics are only for DevOps teams." Wrong. DORA metrics measure the entire software delivery process, from development through deployment. They are relevant to engineering leadership, product teams, and anyone who cares about how fast and reliably software reaches users.

"We need to optimize all four metrics simultaneously." Start with one. Most teams benefit from focusing on deployment frequency first, because increasing frequency naturally drives improvements in the other three. When you deploy small batches frequently, lead time drops, failures are easier to diagnose, and recovery is faster.

"High deployment frequency means chaos." The opposite. Teams that deploy frequently typically have better automation, better testing, and better processes. Low deployment frequency often indicates manual, risky deployment processes.

"DORA metrics can be gamed." Any metric can be gamed. Deploying empty commits increases frequency. Ignoring incidents reduces change failure rate. The solution is to use all four metrics together and focus on trends rather than absolute numbers.


Frequently Asked Questions

Q: What are the 4 DORA metrics? A: The four DORA metrics are: (1) Deployment Frequency, how often you deploy to production, (2) Lead Time for Changes, time from commit to production, (3) Change Failure Rate, percentage of deployments causing failures, and (4) Mean Time to Recovery, how quickly you recover from incidents.

Q: How do you measure DORA metrics? A: You can measure DORA metrics through manual surveys, CI/CD pipeline data, incident management tools, or dedicated platforms like Sleuth, Glue, or Faros AI. Most teams start with CI/CD data for deployment frequency and lead time, and incident management data for MTTR and change failure rate.

Q: What is a good deployment frequency? A: Elite teams deploy multiple times per day. High-performing teams deploy between daily and weekly. If you are deploying less than once per month, there is significant room for improvement. The key is to deploy small batches frequently rather than large batches infrequently.

Q: How do DORA metrics relate to team performance? A: Research by the DORA team shows that teams with better DORA metrics consistently achieve better business outcomes including higher profitability, market share, and employee satisfaction. The four metrics together capture both the speed and stability of software delivery.


Related Reading

  • Deployment Frequency: The DORA Metric That Reveals Your True Engineering Velocity
  • Change Failure Rate: The DORA Metric That Reveals Your Software Quality
  • Mean Time to Recovery: The Complete Guide to Faster Incident Resolution
  • Cycle Time: Definition, Formula, and Why It Matters
  • Lead Time: Definition, Measurement, and How to Reduce It
  • Software Productivity: What It Really Means and How to Measure It
