
F-02 FOUNDATION

DORA Metrics

The four measures of software delivery performance. What they measure, what good looks like, and how to use them to improve.

Sources: Accelerate — Forsgren, Humble, Kim; DORA State of DevOps Research (2019–2023)

Video Lesson

A video lesson for this topic is in development. The library articles and mission exercises cover the same material in the meantime.

01

What is DORA?

DORA — the DevOps Research and Assessment team — is a research group founded by Dr. Nicole Forsgren, Jez Humble, and Gene Kim. Since 2014 they have published the State of DevOps Report, surveying tens of thousands of professionals annually to understand what separates high-performing software organizations from low performers.

Their key finding: software delivery performance can be measured objectively, and it predicts organizational performance — profitability, market share, and the ability to meet customer goals. Performance is not a soft concept. It is measurable.

DORA's research identified four metrics that together capture the speed and stability of software delivery. They are now widely used as the industry standard for measuring DevOps maturity.

02

The four metrics

DF

Deployment Frequency

How often does your organization deploy to production? This measures the throughput of your delivery system — how frequently value reaches users.

Elite: On demand (multiple per day)
High: 1× per week – 1× per month
Medium: 1× per month – 1× per 6 months
Low: Less than 1× per 6 months
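As a concrete sketch, deployment frequency falls out of a deploy log directly. The timestamps below are invented for illustration, not real pipeline data:

```python
from datetime import datetime

# Hypothetical production deploy log: one timestamp per deployment.
deploys = [
    datetime(2024, 3, 1, 9), datetime(2024, 3, 1, 15),
    datetime(2024, 3, 2, 11), datetime(2024, 3, 4, 10),
    datetime(2024, 3, 5, 16), datetime(2024, 3, 7, 9),
    datetime(2024, 3, 8, 14),
]

# Average deploys per day across the observed window.
window_days = max(1, (max(deploys) - min(deploys)).days)
per_day = len(deploys) / window_days
print(f"{len(deploys)} deploys over {window_days} days = {per_day:.1f}/day")  # 1.0/day
```

A rate of one deploy per day sits between the High and Elite bands above: faster than weekly, not yet on-demand multiple times a day.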

LT

Lead Time for Changes

How long does it take for a commit to reach production? This measures the flow efficiency of your delivery pipeline — how quickly you can respond to business needs.

Elite: Less than 1 hour
High: 1 day – 1 week
Medium: 1 week – 1 month
Low: More than 6 months
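The same sketch for lead time, with invented commit/deploy timestamp pairs: the metric is commonly taken as the median of (deploy time minus commit time) across recent changes, since the median resists distortion by one outlier change.

```python
from datetime import datetime
from statistics import median

# Hypothetical (commit_time, deploy_time) pairs for recent changes.
changes = [
    (datetime(2024, 3, 1, 9),  datetime(2024, 3, 1, 11)),  # 2 hours
    (datetime(2024, 3, 2, 10), datetime(2024, 3, 2, 18)),  # 8 hours
    (datetime(2024, 3, 4, 14), datetime(2024, 3, 5, 14)),  # 24 hours
]

lead_hours = [(deploy - commit).total_seconds() / 3600 for commit, deploy in changes]
print(f"median lead time: {median(lead_hours):.0f} hours")  # 8 hours
```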

CFR

Change Failure Rate

What percentage of production changes cause a degradation requiring remediation? This measures the quality of your delivery process — how often you introduce problems.

Elite: 0–15%
High: 16–30%
Medium: 16–30%
Low: 46–60%
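Change failure rate is a simple ratio once each deploy is flagged by whether it needed remediation (a rollback, hotfix, or patch). A minimal sketch over made-up deploy records:

```python
# Hypothetical recent deploys, flagged by whether they required remediation.
deploys = [
    {"id": "r101", "needed_remediation": False},
    {"id": "r102", "needed_remediation": True},
    {"id": "r103", "needed_remediation": False},
    {"id": "r104", "needed_remediation": False},
    {"id": "r105", "needed_remediation": True},
]

# Fraction of deploys that caused a degradation requiring remediation.
failures = sum(d["needed_remediation"] for d in deploys)
cfr = failures / len(deploys)
print(f"change failure rate: {cfr:.0%}")  # 40%
```

The hard part in practice is not the arithmetic but the flagging: teams need a consistent definition of "required remediation" for the ratio to be comparable over time.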

MTTR

Mean Time to Restore

How long does it take to restore service when an incident occurs? This measures the resilience of your system — how quickly you recover from failure.

Elite: Less than 1 hour
High: Less than 1 day
Medium: 1 day – 1 week
Low: More than 6 months
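MTTR is the average of restore durations across incidents. A sketch with invented incident records, assuming each incident has a degraded and a restored timestamp:

```python
from datetime import datetime

# Hypothetical incidents: (service degraded, service restored).
incidents = [
    (datetime(2024, 3, 1, 9, 0),  datetime(2024, 3, 1, 10, 30)),  # 1.5 hours
    (datetime(2024, 3, 5, 14, 0), datetime(2024, 3, 5, 20, 0)),   # 6 hours
]

# Mean of restore durations, in hours.
hours = [(restored - degraded).total_seconds() / 3600 for degraded, restored in incidents]
mttr = sum(hours) / len(hours)
print(f"MTTR: {mttr:.2f} hours")  # 3.75 hours
```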

03

Speed and stability are not a trade-off

The most counterintuitive finding in DORA's research: teams that deploy more frequently also have lower change failure rates and faster recovery times. Speed and stability are not opposites.

The reason is simple: small, frequent deployments are inherently less risky than large, infrequent ones. When something breaks in a small deployment, the blast radius is contained and the cause is obvious. Stability comes from deploying often, not from deploying rarely.

Low performer mindset

Deploy infrequently to reduce risk
Large batches = more testing before release
Manual approval gates add safety
Result: slow, fragile, high-stress releases

Elite performer mindset

Deploy frequently to reduce risk per change
Small batches = easy to test and verify
Automated gates are faster and more reliable
Result: fast, stable, routine deployments

04

How to use the metrics

The DORA metrics are most useful as diagnostics, not as targets. Goodhart's Law applies: when a measure becomes a target, it ceases to be a good measure. A team that games its DF metric by deploying trivial changes has not improved its delivery system.

Use them to identify your constraint

Low DF with high LT? Your bottleneck is the pipeline. High CFR with slow MTTR? Your bottleneck is testing and observability. The metrics point to where to focus improvement effort.

Measure trends, not absolutes

Is your lead time improving quarter over quarter? Is your CFR declining? Trends reveal whether your improvements are working. A snapshot tells you where you are; a trend tells you if you are moving.
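Trend-over-snapshot thinking is cheap to automate once the metric is recorded. A sketch using invented quarterly lead-time figures:

```python
# Hypothetical median lead time in days, by quarter.
lead_time_days = [("Q1", 43), ("Q2", 28), ("Q3", 14), ("Q4", 9)]

# Quarter-over-quarter percentage change.
for (prev_q, prev), (cur_q, cur) in zip(lead_time_days, lead_time_days[1:]):
    change = (cur - prev) / prev
    print(f"{prev_q} -> {cur_q}: {change:+.0%}")
```

A consistent downward trend like this says more than any single quarter's absolute number.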

Compare to your past self

DORA benchmarks are useful for orientation, but comparison to your own historical performance is more actionable. You know your context; benchmark data abstracts it away.

05

Nexus Corp's DORA progression

The Nexus Corp missions are designed to move the organization from low performer to high performer across all four metrics. Here is the progression:

Snapshot      Deploy Freq   Lead Time   CFR    MTTR     Tier
Before M-01   1×/month      43 days     42%    72 hrs   LOW
After M-03    1×/month      14 days     18%    72 hrs   MED
After M-04    On demand     < 1 day     15%    48 hrs   HIGH

The path from low to high performer is not about adopting tools. It is about changing the system: shortening feedback loops, automating manual steps, reducing batch sizes, and building quality in rather than inspecting it in at the end.

06

Further reading

Accelerate — Forsgren, Humble, Kim

The book-length treatment of the DORA research. Chapters 2–4: the four key metrics, how to measure them, and what drives them.

DORA State of DevOps 2023

The most recent annual report. Benchmarks, trends, and the latest findings on what predicts software delivery performance.

DevOps Handbook — Part II

The technical practices that move the metrics. Chapters 10–14 map directly to the four DORA metrics.

DORA Quick Check

The official DORA assessment tool. Benchmarks your team against the research data across all four metrics.