
DORA Metrics

DORA (DevOps Research and Assessment) metrics provide a standardized way to measure software delivery performance. GuideMode implements all four key DORA metrics through the Deployment Flow cube.

DORA is a research program that identified four key metrics that distinguish high-performing technology organizations. These metrics measure both speed (deployment frequency, lead time) and stability (change failure rate, MTTR).

| DORA Metric | GuideMode Measure | Source Cube |
|---|---|---|
| Deployment Frequency | productionDeploymentCount | Deployment Flow |
| Lead Time for Changes | avgMergeToDeployMinutes | Deployment Flow |
| Change Failure Rate | changeFailureRate | Deployment Flow |
| Mean Time to Recovery | medianMttrHours | Deployment Flow |

Deployment Frequency

What it measures: How often code deploys to production.

| Measure | Description |
|---|---|
| count | Total deployments (all environments) |
| productionDeploymentCount | Production deployments only |
| successfulProductionDeploymentCount | Successful production deployments |

How to calculate: Filter by time dimension (day, week, month) to calculate frequency.

Benchmarks:

| Level | Frequency |
|---|---|
| Elite | Multiple deploys per day |
| High | Between once per day and once per week |
| Medium | Between once per week and once per month |
| Low | Between once per month and once every six months |
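The benchmark mapping above can be sketched as a small classifier. This is an illustrative helper, not part of GuideMode; it simply normalizes a deployment count over an observation window into deploys per day and buckets the rate into a DORA level.

```python
from datetime import date

# Hypothetical helper: map deployment frequency to a DORA level.
# deploy_dates: dates of production deployments inside the window.
def dora_frequency_level(deploy_dates, window_days):
    deploys_per_day = len(deploy_dates) / window_days
    if deploys_per_day > 1:
        return "Elite"   # multiple deploys per day
    if deploys_per_day >= 1 / 7:
        return "High"    # between once per day and once per week
    if deploys_per_day >= 1 / 30:
        return "Medium"  # between once per week and once per month
    return "Low"         # less often than once per month

# 90 production deployments in a 30-day window -> multiple per day
print(dora_frequency_level([date(2024, 1, 1)] * 90, 30))  # Elite
```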

Lead Time for Changes

What it measures: Time from code commit to production deployment.

GuideMode provides two lead time measures:

Merge to Deploy:

| Aspect | Value |
|---|---|
| Start Point | PR merged |
| End Point | Deployment created |
| Unit | Minutes |
| NULL when | No linked merged PR |

Measures:

  • avgMergeToDeployMinutes - Average pipeline time
  • medianMergeToDeployMinutes - Median pipeline time (recommended)

Issue to Deploy:

| Aspect | Value |
|---|---|
| Start Point | Issue created |
| End Point | Deployment created |
| Unit | Days |
| NULL when | No linked issue |

Measures:

  • avgIssueToDeployDays - Average from issue to deploy

Benchmarks:

| Level | Lead Time |
|---|---|
| Elite | Less than 1 hour |
| High | Between 1 hour and 1 day |
| Medium | Between 1 day and 1 week |
| Low | More than 1 week |
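A minimal sketch of the Merge to Deploy calculation, using illustrative field names rather than GuideMode's actual schema: deployments without a linked merged PR are excluded (their lead time is NULL), and the median is taken over the rest.

```python
from datetime import datetime
from statistics import median

# Hypothetical records: each deployment optionally links a merged PR.
# merged_at is None when no PR is linked (lead time is NULL then).
def merge_to_deploy_minutes(deployments):
    samples = [
        (d["deployed_at"] - d["merged_at"]).total_seconds() / 60
        for d in deployments
        if d.get("merged_at") is not None
    ]
    return median(samples) if samples else None

deps = [
    {"merged_at": datetime(2024, 1, 1, 10, 0), "deployed_at": datetime(2024, 1, 1, 10, 45)},
    {"merged_at": datetime(2024, 1, 2, 9, 0), "deployed_at": datetime(2024, 1, 2, 9, 15)},
    {"merged_at": None, "deployed_at": datetime(2024, 1, 3, 8, 0)},  # excluded
]
print(merge_to_deploy_minutes(deps))  # 30.0 (median of 45 and 15)
```

The median is recommended over the average because a single stuck pipeline can drag the mean far from what a typical change experiences.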

Change Failure Rate

What it measures: Percentage of production deployments that result in failure.

| Aspect | Value |
|---|---|
| Measure | changeFailureRate |
| Numerator | Production deployments that failed |
| Denominator | Completed production deployments (success + failure) |
| Unit | Percentage |

Related counts:

  • changeFailureCount - Number of failed production deployments
  • productionDeploymentCount - Total production deployments

What counts as a failure:

  • Build failures
  • Test failures
  • Deployment errors
  • Health check failures
  • Successful rollbacks (the initial deployment still failed)

Benchmarks:

| Level | Failure Rate |
|---|---|
| Elite | 0-15% |
| High | 16-30% |
| Medium | 31-45% |
| Low | 46-60% |

Note: This metric only counts completed deployments. In-progress deployments are excluded to avoid skewing the rate.
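The rate definition above can be sketched as follows. Field names are illustrative; the key points from the definition are that only production deployments count, and that in-progress deployments are excluded from the denominator.

```python
# Hypothetical statuses: "success", "failure" (covering any failed outcome),
# and "in_progress". Only completed production deployments are counted.
def change_failure_rate(deployments):
    completed = [
        d for d in deployments
        if d["is_production"] and d["status"] in ("success", "failure")
    ]
    if not completed:
        return None  # no completed production deployments yet
    failed = sum(1 for d in completed if d["status"] == "failure")
    return 100.0 * failed / len(completed)

deps = [
    {"is_production": True, "status": "success"},
    {"is_production": True, "status": "failure"},
    {"is_production": True, "status": "in_progress"},  # excluded
    {"is_production": False, "status": "failure"},     # staging, excluded
]
print(change_failure_rate(deps))  # 50.0
```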


Mean Time to Recovery

What it measures: How quickly the team recovers from a failure.

| Aspect | Value |
|---|---|
| Start Point | First failure status |
| End Point | First success status |
| Unit | Hours |
| NULL when | No failure occurred, or no recovery yet |

Measures:

  • avgMttrHours - Average recovery time
  • medianMttrHours - Median recovery time (recommended)

Benchmarks:

| Level | Recovery Time |
|---|---|
| Elite | Less than 1 hour |
| High | Less than 1 day (24 hours) |
| Medium | Less than 1 week |
| Low | More than 1 week |

Note: MTTR measures how quickly the system recovers, not how quickly the root cause is fixed. A successful deployment indicates the system is working again.
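A sketch of the first-failure-to-first-success rule, assuming a per-deployment log of (timestamp, status) events rather than GuideMode's actual data model:

```python
from datetime import datetime

# Hypothetical status event log: (timestamp, status) pairs.
# MTTR runs from the FIRST failure to the FIRST subsequent success;
# it is None (NULL) if there was no failure, or no recovery yet.
def mttr_hours(events):
    events = sorted(events)
    first_failure = next(
        (t for t, s in events if s in ("failure", "error")), None)
    if first_failure is None:
        return None  # no failure occurred
    recovery = next(
        (t for t, s in events if s == "success" and t > first_failure), None)
    if recovery is None:
        return None  # no recovery yet
    return (recovery - first_failure).total_seconds() / 3600

log = [
    (datetime(2024, 1, 1, 12, 0), "failure"),
    (datetime(2024, 1, 1, 13, 30), "success"),
]
print(mttr_hours(log))  # 1.5
```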


Benchmark Summary

| Metric | Elite | High | Medium | Low |
|---|---|---|---|---|
| Deployment Frequency | On-demand (multiple/day) | Weekly to daily | Monthly to weekly | Monthly to 6-monthly |
| Lead Time for Changes | < 1 hour | 1 hour - 1 day | 1 day - 1 week | > 1 week |
| Change Failure Rate | 0-15% | 16-30% | 31-45% | 46-60% |
| MTTR | < 1 hour | < 1 day | < 1 week | > 1 week |

Source: DORA State of DevOps Reports


A combined DORA view uses the following measures and filters:

```
Measures:
- productionDeploymentCount (Deployment Frequency)
- medianMergeToDeployMinutes (Lead Time)
- changeFailureRate (Change Failure Rate)
- medianMttrHours (MTTR)
Filter: isProduction = true
Time: Last 30/90 days
```
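If GuideMode exposes the Deployment Flow cube through a Cube-style REST API, the combined query might look like the sketch below. The cube name `DeploymentFlow` and the time dimension `deployedAt` are assumptions, not confirmed by this page; only the measure names and the isProduction filter come from the documentation above.

```python
import json

# Hypothetical Cube-style query payload; member names are assumptions.
dora_query = {
    "measures": [
        "DeploymentFlow.productionDeploymentCount",   # Deployment Frequency
        "DeploymentFlow.medianMergeToDeployMinutes",  # Lead Time
        "DeploymentFlow.changeFailureRate",           # Change Failure Rate
        "DeploymentFlow.medianMttrHours",             # MTTR
    ],
    "filters": [
        {"member": "DeploymentFlow.isProduction",
         "operator": "equals", "values": ["true"]}
    ],
    "timeDimensions": [
        {"dimension": "DeploymentFlow.deployedAt",
         "dateRange": "last 30 days"}
    ],
}
print(json.dumps(dora_query, indent=2))
```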

An elite team typically shows:

  • Deployment Frequency: > 1 per day
  • Lead Time: < 60 minutes
  • Change Failure Rate: < 15%
  • MTTR: < 60 minutes

When metrics are poor, prioritize improvements in this order:

  1. Change Failure Rate - Reduce failures first (quality)
  2. MTTR - Recover faster from failures (resilience)
  3. Lead Time - Speed up the pipeline (efficiency)
  4. Deployment Frequency - Deploy more often (throughput)

This order ensures you build stability before speed.


Q: Why is MTTR calculated from first failure to first success, not to resolution?

MTTR measures how quickly the system recovers, not how quickly the root cause is fixed. A successful deployment indicates the system is working again, even if follow-up work is needed.

Q: My change failure rate seems too high - what’s counted as a failure?

A deployment is counted as a failure if its status ever reaches ‘failure’ or ‘error’. This includes build failures, test failures, deployment errors, and health check failures. Successful rollbacks are still counted as failures (the initial deployment failed).

Q: Why does Merge to Deploy use minutes instead of hours?

Modern CI/CD pipelines often complete in minutes. Using minutes provides better granularity for elite teams while still being meaningful for slower pipelines.

Q: How is production determined?

The isProduction flag is set when the environment name matches production patterns (e.g., “production”, “prod”, “prd”). Check your deployment configuration if results look unexpected.
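A sketch of what such environment-name matching could look like, based only on the patterns mentioned above; the exact rule GuideMode applies (whole-name vs. substring, suffix handling) may differ.

```python
import re

# Hypothetical matcher for the isProduction flag. Matches "production",
# "prod", or "prd", optionally followed by a -/_// suffix (e.g. "prd-eu").
PRODUCTION_RE = re.compile(r"^(production|prod|prd)([-_/].*)?$", re.IGNORECASE)

def is_production(environment_name):
    return PRODUCTION_RE.match(environment_name.strip()) is not None

for env in ["production", "Prod", "prd-eu-west", "staging", "preprod"]:
    print(env, is_production(env))
```

Note that anchored matching keeps names like "preprod" from being misclassified as production, which a plain substring check would get wrong.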

Q: Can I get MTTR for specific failure types?

Currently, MTTR is calculated for all failures. Future versions may support filtering by failure type or error category.

Q: Why do some deployments show NULL for MTTR?

MTTR is only calculated for deployments that experienced a failure. Successful deployments (no failure at any point) have NULL MTTR - this is correct, not a data issue.