
Discovery Flow Metrics

The Discovery Flow tracks research and validation work from initial discovery through to production deployment. Use these metrics to understand how effectively your team validates ideas before implementation.

Grain: One row per discovery issue
Refresh: Every 60 minutes
Primary Use Case: Research process effectiveness and idea validation

Discovery issues represent research or investigation work that may lead to validated features. The Discovery Flow tracks:

  • How long discoveries take to validate
  • What percentage of discoveries result in features
  • How quickly validated work moves to production

All duration measures are calculated from specific timestamps. Understanding exactly what each measures is crucial for correct interpretation.

Discovery Duration

What it measures: How long the discovery phase took from start to finish.

| Aspect | Value |
| --- | --- |
| Start Point | Discovery issue created |
| End Point | Discovery issue closed |
| Unit | Days |
| NULL when | Discovery not yet closed |

Available aggregations:

  • avgDiscoveryDurationDays - Average across all discoveries
  • medianDiscoveryDurationDays - Median (recommended for typical performance)

Interpretation:

  • Lower is generally better - Faster validation cycles
  • Very low values may indicate insufficient research
  • Very high values may indicate scope creep or blocked work

Typical ranges:

  • Quick spikes: 1-3 days
  • Standard research: 5-10 days
  • Deep investigation: 2-4 weeks
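As a minimal sketch of how a per-row duration like this can be derived, assuming ISO-8601 timestamps (the function name `discovery_duration_days` is illustrative, not part of the metric API):

```python
from datetime import datetime
from typing import Optional

def discovery_duration_days(created_at: str, closed_at: Optional[str]) -> Optional[float]:
    """Days from discovery creation to close; None (NULL) while still open."""
    if closed_at is None:
        return None  # NULL when: discovery not yet closed
    created = datetime.fromisoformat(created_at)
    closed = datetime.fromisoformat(closed_at)
    return (closed - created).total_seconds() / 86400  # fractional days

# A three-day research spike:
discovery_duration_days("2024-03-01T09:00:00", "2024-03-04T09:00:00")  # → 3.0
```

Returning fractional days (rather than truncating to whole days) keeps same-day closes distinguishable from zero-length rows.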

Discovery to PR

What it measures: Time between completing discovery and starting implementation.

| Aspect | Value |
| --- | --- |
| Start Point | Discovery issue closed |
| End Point | First PR created |
| Unit | Days |
| NULL when | Discovery not closed, or no PR created yet |

Available aggregations:

  • avgDiscoveryToPrDays - Average time to start implementation
  • medianDiscoveryToPrDays - Median time (recommended)

Interpretation:

  • Measures handoff efficiency between research and implementation
  • High values indicate bottlenecks in work prioritization
  • Negative values are possible if PR work starts before discovery closes

Typical ranges:

  • Well-prioritized: 0-2 days
  • Normal backlog: 3-7 days
  • Backlog issues: 2+ weeks
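The signed difference makes the negative case above fall out naturally. A sketch, again assuming ISO-8601 timestamps and a hypothetical function name:

```python
from datetime import datetime
from typing import Optional

def discovery_to_pr_days(closed_at: Optional[str], first_pr_at: Optional[str]) -> Optional[float]:
    """Days from discovery close to first PR; None (NULL) until both events exist."""
    if closed_at is None or first_pr_at is None:
        return None  # NULL when: discovery not closed, or no PR created yet
    delta = datetime.fromisoformat(first_pr_at) - datetime.fromisoformat(closed_at)
    return delta.total_seconds() / 86400  # negative if the PR predates the close

# PR opened a day before the discovery closed (parallel work):
discovery_to_pr_days("2024-03-04T09:00:00", "2024-03-03T09:00:00")  # → -1.0
```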

Discovery to Production

What it measures: Time from completing discovery to reaching production.

| Aspect | Value |
| --- | --- |
| Start Point | Discovery issue closed |
| End Point | First production deployment deployed |
| Unit | Days |
| NULL when | Discovery not closed, or no production deployment yet |

Available aggregations:

  • avgDiscoveryToProductionDays - Average time to production
  • medianDiscoveryToProductionDays - Median time (recommended)

Interpretation:

  • End-to-end implementation time from validated idea to production
  • Includes PR creation, review, merge, and deployment
  • High values indicate slow delivery pipeline or complex implementations

Typical ranges:

  • Fast delivery: 3-7 days
  • Standard delivery: 1-3 weeks
  • Complex features: 1-2 months

Total Lead Time

What it measures: Complete end-to-end time from discovery creation to production.

| Aspect | Value |
| --- | --- |
| Start Point | Discovery issue created |
| End Point | First production deployment deployed |
| Unit | Days |
| NULL when | No production deployment yet |

Available aggregations:

  • avgTotalLeadTimeDays - Average total lead time
  • medianTotalLeadTimeDays - Median (uses successful deployments)
  • p90TotalLeadTimeDays - 90th percentile (for capacity planning)

Interpretation:

  • Most comprehensive metric for idea-to-production time
  • Combines discovery duration + implementation time
  • P90 is useful for setting expectations with stakeholders

Typical ranges:

  • High-performing: 1-2 weeks
  • Standard: 2-4 weeks
  • Needs improvement: 1+ months
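To show how the median and P90 aggregations relate, here is a sketch using Python's standard `statistics` module on a hypothetical sample of per-discovery lead times (the data below is invented for illustration):

```python
import statistics

# Hypothetical total lead times (created → production), one value per discovery, in days:
lead_times = [5, 7, 9, 12, 14, 18, 21, 25, 30, 45]

median_days = statistics.median(lead_times)            # typical case
p90_days = statistics.quantiles(lead_times, n=10)[-1]  # last of 9 deciles = 90th percentile

print(median_days, p90_days)
```

Note how the one 45-day outlier barely moves the median but dominates the P90, which is exactly why P90 is the better number for stakeholder expectations.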

Validation Status

Discoveries can have four validation states:

| Status | Meaning |
| --- | --- |
| validated | Discovery resulted in validated features |
| invalidated | Discovery was rejected/not pursued |
| closed_unvalidated | Closed without explicit validation decision |
| in_progress | Still open/being researched |

Measure: validationRate

Percentage of completed discoveries that were validated into features.

validationRate = (validated_count / completed_count) * 100

Where completed_count = validated + invalidated + closed_unvalidated

Interpretation:

  • 50-70% is typical for healthy discovery processes
  • Very high rates (90%+) may indicate bias toward validation
  • Very low rates (below 30%) may indicate poor initial filtering
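The formula above can be sketched directly; note the guard for the empty case (no completed discoveries yet), which the formula itself leaves undefined:

```python
def validation_rate(validated: int, invalidated: int, closed_unvalidated: int) -> float:
    """Share of completed discoveries that were validated into features."""
    completed = validated + invalidated + closed_unvalidated  # in_progress excluded
    if completed == 0:
        return 0.0  # no completed discoveries yet; avoid division by zero
    return validated / completed * 100

# 12 validated out of 20 completed discoveries:
validation_rate(12, 5, 3)  # → 60.0
```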

Additional count measures:

| Measure | Description |
| --- | --- |
| count | Total number of discoveries |
| validatedCount | Discoveries validated into features |
| invalidatedCount | Discoveries marked as invalid |
| closedUnvalidatedCount | Closed without validation decision |
| inProgressCount | Still being researched |

Track how discoveries progress through the delivery pipeline:

Measure: discoveryToPrConversionRate

Percentage of discoveries that resulted in at least one PR.

discoveryToPrConversionRate = (discoveries_with_pr / total_discoveries) * 100

Interpretation:

  • Measures implementation rate of discoveries
  • Some discoveries may not require code changes (documentation, process changes)
  • Very low rates may indicate discoveries aren’t actionable

Measure: discoveryToMergeConversionRate

Percentage of discoveries with at least one merged PR.

Interpretation:

  • Measures completion rate through code review
  • Gap between “to PR” and “to merge” indicates review bottlenecks

Measure: discoveryToDeploymentConversionRate

Percentage of discoveries that reached successful production deployment.

Interpretation:

  • Ultimate success metric - validated ideas in production
  • Should track closely with merge rate for healthy pipelines

Measure: avgValidatedFeatureCount

Average number of features generated per discovery.

Interpretation:

  • Scope indicator - do discoveries spawn one feature or many?
  • High values may indicate discoveries are too broad
  • Value of 1.0-2.0 is typical

Measure: avgLinkedPrCount

Average number of PRs associated with each discovery.

Interpretation:

  • Implementation complexity indicator
  • High values suggest complex or poorly scoped discoveries

Filter discovery metrics by:

| Dimension | Description |
| --- | --- |
| provider | Issue tracking provider (e.g., Jira, Linear) |
| state | Current issue state |
| validationStatus | validated, invalidated, closed_unvalidated, in_progress |
| isDiscoveryComplete | Whether discovery is closed |
| hasPr | Whether any PR exists |
| hasMergedPr | Whether any PR is merged |
| hasProductionDeployment | Whether deployed to production |
| hasSuccessfulProductionDeployment | Whether successfully deployed |

Via Joins:

  • Projects.id / Projects.name - Filter by project
  • Teams.id / Teams.name - Filter by team
  • Users.id / Users.name - Filter by discovery author

Discovery process health:
- validationRate (target: 50-70%)
- medianDiscoveryDurationDays (target: < 2 weeks)
- inProgressCount (watch for growing backlog)

Handoff efficiency:
- medianDiscoveryToPrDays (target: < 1 week)
- discoveryToPrConversionRate (compare to validation rate)

End-to-end delivery:
- medianTotalLeadTimeDays (primary KPI)
- p90TotalLeadTimeDays (for SLA planning)
- discoveryToDeploymentConversionRate (success rate)

Q: What’s the difference between “Discovery Duration” and “Discovery to Production”?

Discovery Duration measures only the research phase (created → closed). Discovery to Production measures the time after the discovery closes until code reaches production. Total Lead Time combines both.

Q: Why is Discovery to PR measured from closed, not created?

This measures handoff efficiency - how quickly the team starts implementation after validation. If measured from creation, it would conflate research time with implementation time.

Q: A discovery shows 0 days for “Discovery to PR” - is that wrong?

No, this indicates the PR was created on the same day the discovery closed. This is good! It means implementation started immediately after validation.

Q: Can Discovery to PR be negative?

Technically yes, if a PR is created before the discovery closes (parallel work). This is unusual but not necessarily wrong - sometimes implementation starts during validation.