# Delivery Flow Metrics
The Delivery Flow tracks work items (features, bugs, chores, incidents) from creation through to production deployment. Use these metrics to understand team velocity, code review efficiency, and delivery pipeline performance.
## Overview

- Grain: One row per feature/bug/chore/incident issue
- Refresh: Every 60 minutes
- Primary Use Case: Team velocity, cycle time analysis, and delivery efficiency
The Delivery Flow provides comprehensive metrics for:
- Issue lifecycle from creation to closure
- PR cycle time and code review efficiency
- Time from merge to production deployment
- Work type breakdown (planned vs unplanned)
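The lifecycle above can be sketched as one record per issue, where each duration measure is the gap between two timestamps. A minimal sketch in Python; the field names are hypothetical and only illustrate the grain, not the actual schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical per-issue record (field names are illustrative, not the
# real schema). One instance = one row at the flow's grain. Every
# duration measure below is the gap between two of these timestamps,
# and is NULL (None here) while the later event has not happened yet.
@dataclass
class IssueRecord:
    created_at: datetime                                    # start of Issue Duration / Total Lead Time
    closed_at: Optional[datetime] = None                    # end of Issue Duration
    first_response_at: Optional[datetime] = None            # end of Time to First Response
    first_pr_created_at: Optional[datetime] = None          # end of Issue to PR, start of PR Cycle Time
    first_pr_merged_at: Optional[datetime] = None           # end of PR Cycle Time, start of Merge to Deploy
    first_successful_deploy_at: Optional[datetime] = None   # end of Merge to Deploy / Total Lead Time
```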
## Duration Measures

All duration measures track specific phases of the delivery lifecycle.
### Issue Duration

What it measures: Total time an issue was open.
| Aspect | Value |
|---|---|
| Start Point | Issue created |
| End Point | Issue closed |
| Unit | Days |
| NULL when | Issue not yet closed |
Available aggregations:
- `avgIssueDurationDays` - Average time issues are open
- `medianIssueDurationDays` - Median time (recommended)
Interpretation:
- Measures overall cycle time from work identification to completion
- Includes all time: waiting, development, review, and deployment
- High values indicate blocked work or scope creep
Typical ranges:
- Quick fixes: 1-3 days
- Standard features: 1-2 weeks
- Complex features: 2-4 weeks
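The definition above amounts to a simple timestamp difference. A sketch of how such a measure could be computed, including the NULL rule for open issues (an assumption about the platform's exact arithmetic):

```python
from datetime import datetime
from typing import Optional

def issue_duration_days(created_at: datetime,
                        closed_at: Optional[datetime]) -> Optional[float]:
    """Total time an issue was open, in days; None (NULL) until it closes."""
    if closed_at is None:
        return None
    return (closed_at - created_at).total_seconds() / 86400  # seconds per day

# A fix opened Monday morning and closed Thursday morning: 3.0 days.
issue_duration_days(datetime(2024, 3, 4, 9, 0), datetime(2024, 3, 7, 9, 0))
```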
### Time to First Response

What it measures: How quickly issues receive initial attention.
| Aspect | Value |
|---|---|
| Start Point | Issue created |
| End Point | First response (comment, assignment, etc.) |
| Unit | Days |
| NULL when | No response recorded |
Available aggregations:
- `avgTimeToFirstResponseDays` - Average response time
Interpretation:
- Team responsiveness indicator
- Low values indicate good triage processes
- High values may indicate understaffing or poor notifications
### Issue to PR

What it measures: Time to start coding after issue creation.
| Aspect | Value |
|---|---|
| Start Point | Issue created |
| End Point | First PR created |
| Unit | Days |
| NULL when | No PR created yet |
Available aggregations:
- `avgIssueToPrDays` - Average time to start coding
- `medianIssueToPrDays` - Median time (recommended)
Interpretation:
- Measures pickup time - how quickly work gets started
- Includes any waiting time before development begins
- Low values indicate good work prioritization
Typical ranges:
- Fast teams: 0-1 days
- Standard: 2-5 days
- Backlog heavy: 1+ weeks
### PR Cycle Time

What it measures: Time from PR creation to merge.
| Aspect | Value |
|---|---|
| Start Point | First PR created |
| End Point | First PR merged |
| Unit | Days |
| NULL when | No PR merged yet |
Available aggregations:
- `avgPrCycleTimeDays` - Average PR duration
- `medianPrCycleTimeDays` - Median (recommended)
Interpretation:
- Code review efficiency metric
- Includes review time, feedback cycles, and CI/CD runs
- High values indicate review bottlenecks
Typical ranges:
- High-performing: < 1 day
- Standard: 1-3 days
- Needs improvement: > 1 week
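The median variant is recommended because one stuck PR can drag the average far above what the team typically experiences. A sketch of the aggregation, assuming unmerged PRs (NULL) are simply excluded:

```python
import statistics
from typing import List, Optional

def median_pr_cycle_time_days(cycle_times: List[Optional[float]]) -> Optional[float]:
    """Median PR cycle time; issues without a merged PR (None) are excluded,
    mirroring the NULL-when-no-merge rule above."""
    merged = [d for d in cycle_times if d is not None]
    return statistics.median(merged) if merged else None

# One stuck 9-day PR pulls the average to ~3.2 days, but the median
# stays at 1.6 days, which is why the median variant is recommended.
samples = [0.5, 1.2, None, 9.0, 2.0]
median_pr_cycle_time_days(samples)  # 1.6
```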
### PR to Review

What it measures: Time waiting for first review.
| Aspect | Value |
|---|---|
| Start Point | PR created |
| End Point | First review submitted |
| Unit | Days |
| NULL when | No review yet |
Available aggregations:
- `avgPrToReviewDays` - Average wait time for review
Interpretation:
- Review queue indicator
- High values indicate reviewer bottleneck
- Target: Same day or next business day
### PR to Approval

What it measures: Time from PR creation to first approval.
| Aspect | Value |
|---|---|
| Start Point | PR created |
| End Point | First approval received |
| Unit | Days |
| NULL when | No approval yet |
Available aggregations:
- `avgPrToApprovalDays` - Average time to approval
Interpretation:
- Total review cycle including feedback iterations
- Gap between “to review” and “to approval” = feedback cycle time
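The gap described in the last bullet is a simple subtraction of the two measures. A sketch, assuming both values are in days and NULL propagates:

```python
from typing import Optional

def feedback_cycle_days(pr_to_review_days: Optional[float],
                        pr_to_approval_days: Optional[float]) -> Optional[float]:
    """Time spent iterating on review feedback: the gap between the
    first review and the first approval."""
    if pr_to_review_days is None or pr_to_approval_days is None:
        return None
    return pr_to_approval_days - pr_to_review_days

# Reviewed after half a day, approved after 2.5 days:
# 2.0 days spent in feedback iterations.
feedback_cycle_days(0.5, 2.5)  # 2.0
```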
### Merge to Deploy

What it measures: Time from code merge to production deployment.
| Aspect | Value |
|---|---|
| Start Point | First PR merged |
| End Point | First successful production deployment |
| Unit | Days |
| NULL when | No successful production deployment |
Available aggregations:
- `avgMergeToDeployDays` - Average deployment pipeline time
- `medianMergeToDeployDays` - Median (recommended)
Interpretation:
- Deployment pipeline efficiency
- Low values indicate good CI/CD practices
- High values may indicate manual deployment gates
Typical ranges:
- Continuous deployment: < 1 hour (shown as < 0.04 days)
- Daily deployments: < 1 day
- Weekly releases: 3-7 days
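Because the unit is days, fast pipelines show up as small fractions. A sketch of the measure and the day-fraction arithmetic behind the "< 0.04 days" figure (the platform's exact rounding is an assumption):

```python
from datetime import datetime
from typing import Optional

def merge_to_deploy_days(merged_at: datetime,
                         deployed_at: Optional[datetime]) -> Optional[float]:
    """Pipeline time in days; None (NULL) until a successful production deploy."""
    if deployed_at is None:
        return None
    return (deployed_at - merged_at).total_seconds() / 86400

# A 45-minute continuous-deployment pipeline shows up as ~0.03 days:
round(merge_to_deploy_days(datetime(2024, 3, 1, 10, 0),
                           datetime(2024, 3, 1, 10, 45)), 3)  # 0.031
```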
### Total Lead Time

What it measures: Complete time from issue creation to successful production deployment.
| Aspect | Value |
|---|---|
| Start Point | Issue created |
| End Point | First successful production deployment |
| Unit | Days |
| NULL when | No successful production deployment |
Available aggregations:
- `avgTotalLeadTimeDays` - Average end-to-end lead time
- `medianTotalLeadTimeDays` - Median (recommended)
- `p90TotalLeadTimeDays` - 90th percentile (for planning)
Interpretation:
- Most comprehensive delivery metric
- Combines all phases: waiting + development + review + deployment
- P90 useful for setting customer expectations
Typical ranges:
- Elite teams: 1-3 days
- High-performing: 1-2 weeks
- Standard: 2-4 weeks
- Needs improvement: 1+ months
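The P90 aggregation answers "how long do 90% of issues take, at most?", which is what makes it useful for setting expectations. A sketch using the nearest-rank definition (one common convention; the platform's exact interpolation may differ):

```python
import math
from typing import List, Optional

def p90_days(values: List[Optional[float]]) -> Optional[float]:
    """Nearest-rank 90th percentile. Undeployed issues (None) are excluded,
    mirroring the NULL-when-no-deployment rule above."""
    vals = sorted(v for v in values if v is not None)
    if not vals:
        return None
    return vals[math.ceil(0.9 * len(vals)) - 1]

lead_times = [1, 2, 2, 3, 3, 4, 5, 8, 14, 21]
p90_days(lead_times)  # 14 -> "90% of issues ship within 14 days"
```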
## Work Type Metrics

### Issue Type Breakdown

| Measure | Description |
|---|---|
| `count` | Total issues |
| `featureCount` | Feature issues |
| `bugCount` | Bug issues |
| `choreCount` | Chore/maintenance issues |
| `incidentCount` | Incident issues |
### Planned vs Unplanned Work

| Measure | Description |
|---|---|
| `plannedWorkCount` | Features + Chores (planned work) |
| `unplannedWorkCount` | Bugs + Incidents (reactive work) |
| `plannedWorkPercentage` | % of work that was planned |
Interpretation:
- Healthy teams: 70-80% planned work
- High unplanned work: May indicate quality issues or understaffing
- 100% planned: May indicate ignoring bugs/incidents
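The percentage follows directly from the count definitions above. A sketch of the ratio:

```python
from typing import Optional

def planned_work_percentage(feature_count: int, chore_count: int,
                            bug_count: int, incident_count: int) -> Optional[float]:
    """Planned work (features + chores) as a share of all work items."""
    planned = feature_count + chore_count
    total = planned + bug_count + incident_count
    return None if total == 0 else 100.0 * planned / total

# 12 features + 3 chores vs 4 bugs + 1 incident -> 75%,
# inside the healthy 70-80% band.
planned_work_percentage(12, 3, 4, 1)  # 75.0
```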
## PR Size Metrics

Track code change size to identify review complexity:
| Measure | Description |
|---|---|
| `avgPrLinesChanged` | Average lines (additions + deletions) per PR |
| `medianPrLinesChanged` | Median lines changed |
| `avgPrChangedFiles` | Average files modified per PR |
PR Size Categories (dimension `prSize`):

- `xs` - Extra small (< 50 lines)
- `s` - Small (50-200 lines)
- `m` - Medium (200-500 lines)
- `l` - Large (500-1000 lines)
- `xl` - Extra large (> 1000 lines)
Best practice: Aim for smaller PRs (xs, s, m). Large PRs have longer review cycles and higher defect rates.
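The categories above can be expressed as a simple bucketing function. A sketch; how the exact boundary values (200, 500, 1000 lines) are assigned is an assumption, since the listed ranges do not say which bucket the edges fall into:

```python
def pr_size(lines_changed: int) -> str:
    """Bucket a PR by total lines changed (additions + deletions).
    Edge handling at exactly 50/200/500/1000 lines is assumed
    lower-bound-inclusive."""
    if lines_changed < 50:
        return "xs"
    if lines_changed < 200:
        return "s"
    if lines_changed < 500:
        return "m"
    if lines_changed < 1000:
        return "l"
    return "xl"

pr_size(120)   # 's'  -> usually a quick review
pr_size(1500)  # 'xl' -> expect longer review cycles and more defects
```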
## Discovery Origin Metrics

Track work that originated from discovery research:
| Measure | Description |
|---|---|
| `fromDiscoveryCount` | Issues created from validated discoveries |
| `fromDiscoveryPercentage` | % of work from discovery process |
Interpretation:
- Measures how much work flows through the discovery process
- Higher percentages indicate more research-driven development
## Completion Metrics

Track delivery pipeline completion rates:
| Measure | Description |
|---|---|
| `withPrCount` | Issues with at least one PR |
| `withMergedPrCount` | Issues with merged PR |
| `withProductionDeploymentCount` | Issues deployed to production |
| `withSuccessfulDeploymentCount` | Issues with successful deployment |
Interpretation:
- Drop-off between stages indicates bottlenecks
- Example: High `withPrCount` but low `withMergedPrCount` = review bottleneck
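The drop-off analysis can be sketched as stage-to-stage conversion rates over the counts above; the function and output format here are illustrative, not a platform feature:

```python
from typing import Dict, Optional

def funnel_conversion(stage_counts: Dict[str, int]) -> Dict[str, Optional[float]]:
    """Stage-to-stage conversion rates; the sharpest drop marks the
    bottleneck stage. Dict order follows the completion measures above."""
    rates: Dict[str, Optional[float]] = {}
    stages = list(stage_counts)
    for prev, cur in zip(stages, stages[1:]):
        denom = stage_counts[prev]
        rates[f"{prev} -> {cur}"] = round(stage_counts[cur] / denom, 2) if denom else None
    return rates

# 40 issues reach a PR but only 22 merge: the 0.55 conversion flags review
# as the bottleneck, while merge -> deploy holds up well at 0.91.
funnel_conversion({
    "withPrCount": 40,
    "withMergedPrCount": 22,
    "withProductionDeploymentCount": 20,
})
```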
## Dimensions (Filters)

Filter delivery metrics by:
| Dimension | Description |
|---|---|
| `provider` | Issue tracking provider |
| `type` | feature, bug, chore, incident |
| `state` | Current issue state |
| `prSize` | xs, s, m, l, xl |
| `isPlannedWork` | Features and chores |
| `isUnplannedWork` | Bugs and incidents |
| `originatedFromDiscovery` | Came from discovery process |
Status flags:
- `isIssueComplete` - Issue is closed
- `hasPr` - Has at least one PR
- `hasMergedPr` - Has merged PR
- `hasProductionDeployment` - Deployed to production
- `hasSuccessfulProductionDeployment` - Successfully deployed
Via Joins:
- `Projects.id` / `Projects.name` - Filter by project
- `Teams.id` / `Teams.name` - Filter by team
- `Users.id` / `Users.name` - Filter by issue author
## Common Analysis Patterns

### Team Velocity Dashboard

Measures:
- `count` (throughput)
- `medianTotalLeadTimeDays` (cycle time)
- `featureCount` vs `bugCount` (work mix)

### Code Review Efficiency

Measures:
- `medianPrCycleTimeDays` (overall review time)
- `avgPrToReviewDays` (time to first review)
- `avgPrToApprovalDays` (time to approval)
- `medianPrLinesChanged` (PR size)

### Deployment Pipeline Health

Measures:
- `medianMergeToDeployDays` (pipeline speed)
- `withMergedPrCount` vs `withSuccessfulDeploymentCount` (completion rate)

### Work Type Analysis

Measures:
- `plannedWorkPercentage` (target: 70-80%)
- `bugCount` trend over time
- `incidentCount` (should be low)

## FAQ

Q: What’s the difference between “Issue Duration” and “Total Lead Time”?
Issue Duration measures how long the issue was open (created → closed). Total Lead Time measures time to production (created → deployed). An issue can close before deployment or deploy before closing.
Q: Why measure PR Cycle Time separately from Total Lead Time?
PR Cycle Time isolates the code review phase. If Total Lead Time is high but PR Cycle Time is low, the bottleneck is elsewhere (pickup time or deployment pipeline).
Q: What’s a good target for PR Cycle Time?
Industry benchmarks suggest < 24 hours for high-performing teams. However, context matters - security-critical code may require longer reviews.
Q: Why split planned vs unplanned work?
This split reveals team health. High unplanned work (bugs, incidents) indicates quality issues. Teams should track this ratio over time.
Q: Some issues show 0 Total Lead Time - is that accurate?
If a PR deploys on the same day the issue was created, the lead time rounds to 0 days. This indicates very fast delivery.
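This rounding behavior is easy to reproduce; a sketch assuming whole-day truncation (the platform may round rather than truncate):

```python
from datetime import datetime

created = datetime(2024, 3, 1, 9, 0)
deployed = datetime(2024, 3, 1, 16, 30)  # same-day deploy, 7.5 hours later

# timedelta.days truncates to whole days, so a same-day deploy reads as 0,
# even though the fractional lead time is nonzero.
whole_days = (deployed - created).days                      # 0
fractional = (deployed - created).total_seconds() / 86400   # 0.3125 days
```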
## Related

- Analytics Overview - Understanding the three flows
- Discovery Flow - Track research validation
- Deployment Flow - Deployment pipeline metrics
- DORA Metrics - DevOps performance framework
- SPACE Framework - Developer productivity framework