# DORA Metrics
DORA (DevOps Research and Assessment) metrics provide a standardized way to measure software delivery performance. GuideMode implements all four key DORA metrics through the Deployment Flow cube.
## What is DORA?

DORA is a research program that identified four key metrics that distinguish high-performing technology organizations. These metrics measure both speed (deployment frequency, lead time) and stability (change failure rate, MTTR).
## The Four Key Metrics

| DORA Metric | GuideMode Measure | Source Cube |
|---|---|---|
| Deployment Frequency | `productionDeploymentCount` | Deployment Flow |
| Lead Time for Changes | `avgMergeToDeployMinutes` | Deployment Flow |
| Change Failure Rate | `changeFailureRate` | Deployment Flow |
| Mean Time to Recovery | `medianMttrHours` | Deployment Flow |
## Deployment Frequency

What it measures: How often code deploys to production.
| Measure | Description |
|---|---|
| `count` | Total deployments (all environments) |
| `productionDeploymentCount` | Production deployments only |
| `successfulProductionDeploymentCount` | Successful production deployments |
How to calculate: Group deployments by a time dimension (day, week, month) and count per period to get the frequency.
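As a sketch of that calculation, the snippet below groups deployment dates by ISO week and averages the count per week. The sample dates are hypothetical; in practice the timestamps would come from the Deployment Flow cube filtered to production deployments.

```python
from collections import Counter
from datetime import date

# Hypothetical production deployment dates (illustrative data only).
deployments = [
    date(2024, 3, 4), date(2024, 3, 5), date(2024, 3, 7),
    date(2024, 3, 12), date(2024, 3, 20),
]

# Group by ISO (year, week) to count deployments per week.
per_week = Counter(d.isocalendar()[:2] for d in deployments)

# Average deployments per active week.
avg_per_week = sum(per_week.values()) / len(per_week)
print(per_week)
print(avg_per_week)
```

The same grouping with a day key instead of a week key yields daily frequency.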
Benchmarks:
| Level | Frequency |
|---|---|
| Elite | Multiple deploys per day |
| High | Between once per day and once per week |
| Medium | Between once per week and once per month |
| Low | Between once per month and once every six months |
## Lead Time for Changes

What it measures: Time from code commit to production deployment.
GuideMode provides two lead time measures:
### Merge to Deploy (Pipeline Lead Time)

| Aspect | Value |
|---|---|
| Start Point | PR merged |
| End Point | Deployment created |
| Unit | Minutes |
| NULL when | No linked merged PR |
Measures:
- `avgMergeToDeployMinutes` - Average pipeline time
- `medianMergeToDeployMinutes` - Median pipeline time (recommended)
### Issue to Deploy (Full Lead Time)

| Aspect | Value |
|---|---|
| Start Point | Issue created |
| End Point | Deployment created |
| Unit | Days |
| NULL when | No linked issue |
Measures:
- `avgIssueToDeployDays` - Average from issue creation to deployment
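Both lead-time measures above follow the same pattern: subtract the start timestamp from the deployment timestamp and skip records whose link is missing (the NULL cases). A minimal sketch, with the record shape and field names assumed for illustration:

```python
from datetime import datetime
from statistics import mean, median

# Hypothetical deployment records; merged_at / issue_created_at are None
# when no PR or issue is linked, mirroring the NULL semantics above.
deployments = [
    {"deployed_at": datetime(2024, 3, 5, 12, 0),
     "merged_at": datetime(2024, 3, 5, 11, 18),
     "issue_created_at": datetime(2024, 3, 1, 12, 0)},
    {"deployed_at": datetime(2024, 3, 6, 9, 0),
     "merged_at": datetime(2024, 3, 6, 8, 30),
     "issue_created_at": None},   # excluded from issue-to-deploy
    {"deployed_at": datetime(2024, 3, 7, 15, 0),
     "merged_at": None,           # excluded from merge-to-deploy
     "issue_created_at": datetime(2024, 3, 3, 15, 0)},
]

# Merge to deploy, in minutes (pipeline lead time).
merge_to_deploy_min = [
    (d["deployed_at"] - d["merged_at"]).total_seconds() / 60
    for d in deployments if d["merged_at"] is not None
]

# Issue to deploy, in days (full lead time).
issue_to_deploy_days = [
    (d["deployed_at"] - d["issue_created_at"]).total_seconds() / 86400
    for d in deployments if d["issue_created_at"] is not None
]

print(median(merge_to_deploy_min))  # medianMergeToDeployMinutes
print(mean(issue_to_deploy_days))   # avgIssueToDeployDays
```

The median is recommended for the pipeline measure because a few stalled deployments can inflate the average.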
Benchmarks:
| Level | Lead Time |
|---|---|
| Elite | Less than 1 hour |
| High | Between 1 hour and 1 day |
| Medium | Between 1 day and 1 week |
| Low | More than 1 week |
## Change Failure Rate

What it measures: Percentage of production deployments that result in failure.
| Aspect | Value |
|---|---|
| Measure | `changeFailureRate` |
| Numerator | Production deployments that failed |
| Denominator | Completed production deployments (success + failure) |
| Unit | Percentage |
Related counts:
- `changeFailureCount` - Number of failed production deployments
- `productionDeploymentCount` - Total production deployments
What counts as a failure:
- Build failures
- Test failures
- Deployment errors
- Health check failures
- Successful rollbacks (the initial deployment still failed)
Benchmarks:
| Level | Failure Rate |
|---|---|
| Elite | 0-15% |
| High | 16-30% |
| Medium | 31-45% |
| Low | 46-60% |
Note: This metric only counts completed deployments. In-progress deployments are excluded to avoid skewing the rate.
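The rate described above can be sketched as a ratio over completed deployments only. The status strings here follow the 'failure'/'error' convention mentioned in the FAQ below; the exact status vocabulary is an assumption.

```python
# Hypothetical production deployment statuses. "in_progress" is excluded
# from the denominator, matching the note above.
statuses = ["success", "failure", "success", "error", "in_progress", "success"]

# Completed = reached a terminal state (success or a failure state).
completed = [s for s in statuses if s in ("success", "failure", "error")]

# A deployment counts as failed if it ended in 'failure' or 'error'.
failed = [s for s in completed if s in ("failure", "error")]

change_failure_rate = 100 * len(failed) / len(completed)
print(change_failure_rate)  # 40.0 for this sample
```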
## Mean Time to Recovery (MTTR)

What it measures: How quickly the team recovers from a failure.
| Aspect | Value |
|---|---|
| Start Point | First failure status |
| End Point | First success status |
| Unit | Hours |
| NULL when | No failure occurred, or no recovery yet |
Measures:
- `avgMttrHours` - Average recovery time
- `medianMttrHours` - Median recovery time (recommended)
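The first-failure-to-first-success window can be sketched from an ordered list of status events. The event shape below is assumed for illustration; `None` stands in for the NULL cases in the table above.

```python
from datetime import datetime

def mttr_hours(events):
    """Hours from the first failure to the first subsequent success.

    Returns None (NULL) when no failure occurred, or when there is
    no recovery yet.
    """
    first_failure = next((t for s, t in events if s == "failure"), None)
    if first_failure is None:
        return None  # no failure occurred
    recovery = next(
        (t for s, t in events if s == "success" and t > first_failure), None
    )
    if recovery is None:
        return None  # no recovery yet
    return (recovery - first_failure).total_seconds() / 3600

# Hypothetical status timeline: fails at 10:00, recovers at 16:30.
events = [
    ("success", datetime(2024, 3, 1, 9, 0)),
    ("failure", datetime(2024, 3, 2, 10, 0)),   # first failure
    ("failure", datetime(2024, 3, 2, 14, 0)),   # repeat failure, ignored
    ("success", datetime(2024, 3, 2, 16, 30)),  # first recovery
]
print(mttr_hours(events))  # 6.5
```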
Benchmarks:
| Level | Recovery Time |
|---|---|
| Elite | Less than 1 hour |
| High | Less than 1 day (24 hours) |
| Medium | Less than 1 week |
| Low | More than 1 week |
Note: MTTR measures how quickly the system recovers, not how quickly the root cause is fixed. A successful deployment indicates the system is working again.
## Industry Benchmarks Summary

| Metric | Elite | High | Medium | Low |
|---|---|---|---|---|
| Deployment Frequency | On-demand (multiple/day) | Daily to weekly | Weekly to monthly | Monthly to every six months |
| Lead Time for Changes | < 1 hour | 1 hour - 1 day | 1 day - 1 week | > 1 week |
| Change Failure Rate | 0-15% | 16-30% | 31-45% | 46-60% |
| MTTR | < 1 hour | < 1 day | < 1 week | > 1 week |
Source: DORA State of DevOps Reports
## Building a DORA Dashboard

### Basic DORA Dashboard

Measures:
- `productionDeploymentCount` (Deployment Frequency)
- `medianMergeToDeployMinutes` (Lead Time)
- `changeFailureRate` (Change Failure Rate)
- `medianMttrHours` (MTTR)
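The four measures above could be requested together with a Cube-style query payload. The cube name `DeploymentFlow`, the member names, and the time dimension `createdAt` are assumptions based on this page, not a confirmed API contract; check your deployment's schema for the actual names.

```python
import json

# Hypothetical Cube-style query for a basic DORA dashboard.
query = {
    "measures": [
        "DeploymentFlow.productionDeploymentCount",   # Deployment Frequency
        "DeploymentFlow.medianMergeToDeployMinutes",  # Lead Time
        "DeploymentFlow.changeFailureRate",           # Change Failure Rate
        "DeploymentFlow.medianMttrHours",             # MTTR
    ],
    "filters": [
        {"member": "DeploymentFlow.isProduction",
         "operator": "equals", "values": ["true"]}
    ],
    "timeDimensions": [
        {"dimension": "DeploymentFlow.createdAt",     # assumed dimension name
         "dateRange": "last 30 days"}
    ],
}
print(json.dumps(query, indent=2))
```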
Filter: `isProduction = true`
Time: Last 30/90 days

### Elite Team Indicators

An elite team typically shows:
- Deployment Frequency: > 1 per day
- Lead Time: < 60 minutes
- Change Failure Rate: < 15%
- MTTR: < 60 minutes
## Improvement Priorities

When metrics are poor, prioritize improvements in this order:
1. Change Failure Rate - Reduce failures first (quality)
2. MTTR - Recover faster from failures (resilience)
3. Lead Time - Speed up the pipeline (efficiency)
4. Deployment Frequency - Deploy more often (throughput)
This order ensures you build stability before speed.
## FAQ

Q: Why is MTTR calculated from first failure to first success, not to resolution?
MTTR measures how quickly the system recovers, not how quickly the root cause is fixed. A successful deployment indicates the system is working again, even if follow-up work is needed.
Q: My change failure rate seems too high - what’s counted as a failure?
A deployment is counted as a failure if its status ever reaches ‘failure’ or ‘error’. This includes build failures, test failures, deployment errors, and health check failures. Successful rollbacks are still counted as failures (the initial deployment failed).
Q: Why does Merge to Deploy use minutes instead of hours?
Modern CI/CD pipelines often complete in minutes. Using minutes provides better granularity for elite teams while still being meaningful for slower pipelines.
Q: How is production determined?
The isProduction flag is set based on the environment name matching production patterns (e.g., “production”, “prod”, “prd”). Check your deployment configuration if unexpected.
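As a sketch of that matching, the check below compares the environment name against the listed patterns. The exact pattern list and matching rules GuideMode uses are assumptions beyond the examples given above.

```python
import re

# Illustrative production-environment patterns from the answer above;
# the real configuration may differ.
PROD_PATTERN = re.compile(r"^(production|prod|prd)$", re.IGNORECASE)

def is_production(environment_name: str) -> bool:
    """Return True when the environment name matches a production pattern."""
    return bool(PROD_PATTERN.match(environment_name.strip()))

print(is_production("Production"))  # True
print(is_production("staging"))     # False
```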
Q: Can I get MTTR for specific failure types?
Currently, MTTR is calculated for all failures. Future versions may support filtering by failure type or error category.
Q: Why do some deployments show NULL for MTTR?
MTTR is only calculated for deployments that experienced a failure. Successful deployments (no failure at any point) have NULL MTTR - this is correct, not a data issue.
## Related

- Deployment Flow - Complete deployment cube reference
- Delivery Flow - Work item delivery metrics
- SPACE Framework - Alternative productivity framework