Analytics Overview

GuideMode provides analytics through three interconnected flow cubes that track work from initial discovery through to production deployment. Each flow is optimized for a different analytical need:

Flow              Purpose                Grain                    Key Question
Discovery Flow    Research validation    One row per discovery    How effective is our research process?
Delivery Flow     Work item delivery     One row per issue        How efficiently do we deliver features?
Deployment Flow   Deployment analytics   One row per deployment   How reliable is our deployment pipeline?

Discovery Issues → Delivery Issues → Pull Requests → Deployments
        ↓                 ↓                ↓              ↓
  Discovery Flow     Delivery Flow    (via links)   Deployment Flow

Discovery Flow tracks research and validation work before implementation begins. When a discovery is validated, it may generate feature issues.

Delivery Flow tracks the implementation of features, bugs, chores, and incidents from creation through to production.

Deployment Flow focuses specifically on the deployment pipeline, measuring deployment timing, reliability, and recovery metrics.

All flow analytics are powered by PostgreSQL materialized views (fact tables) that pre-compute metrics for fast dashboard performance:

  • 5-10x faster queries compared to live table aggregations
  • Consistent metrics across all dashboards
  • Automatic refresh every 15-60 minutes via scheduled jobs

Fact Table              Refresh Interval
discovery_flow_facts    Every 60 minutes
delivery_flow_facts     Every 60 minutes
deployment_flow_facts   Every 15 minutes

Metrics may lag behind real-time by up to one refresh interval. For the most critical operational dashboards (DORA metrics), deployment data refreshes more frequently.
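As a rough illustration of that lag bound, the schedule can be modeled in Python. The dictionary and helper names below are invented for this sketch; only the table names and intervals come from the documentation above.

```python
from datetime import datetime, timedelta

# Hypothetical model of the refresh schedule shown in the table above.
REFRESH_INTERVALS = {
    "discovery_flow_facts": timedelta(minutes=60),
    "delivery_flow_facts": timedelta(minutes=60),
    "deployment_flow_facts": timedelta(minutes=15),
}

def worst_case_lag(table):
    """Upper bound on metric staleness: one full refresh interval."""
    return REFRESH_INTERVALS[table]

def refresh_due(table, last_refreshed_at, now):
    """True once a fact table's scheduled refresh interval has elapsed."""
    return now - last_refreshed_at >= REFRESH_INTERVALS[table]
```

This is why DORA dashboards, backed by deployment_flow_facts, can be at most 15 minutes stale while the other flows can lag up to an hour.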

All flow cubes calculate duration metrics in a consistent way:

  1. Duration stored in seconds - Raw durations are calculated using PostgreSQL’s EXTRACT(EPOCH FROM ...) function
  2. Displayed in human-readable units - Cubes convert to days, hours, or minutes as appropriate
  3. Statistical aggregations - Most durations offer average, median, and P90 measures
  4. NULL handling - Durations are NULL when the end event hasn’t occurred yet
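The four conventions above can be sketched in Python. This is a simplified model, not the actual implementation: the real fact tables compute durations in SQL with EXTRACT(EPOCH FROM ...), and the helper names here are invented for the sketch.

```python
from statistics import mean, median, quantiles

def duration_seconds(start, end):
    """Duration between two events in seconds, mirroring
    EXTRACT(EPOCH FROM end - start); None while the end event is pending."""
    if start is None or end is None:
        return None
    return (end - start).total_seconds()

def humanize(seconds):
    """Convert raw seconds to a human-readable display unit."""
    if seconds is None:
        return None
    if seconds >= 86400:
        return f"{seconds / 86400:.1f} days"
    if seconds >= 3600:
        return f"{seconds / 3600:.1f} hours"
    return f"{seconds / 60:.1f} minutes"

def summarize(durations):
    """Average, median, and P90 over completed durations only."""
    done = [d for d in durations if d is not None]
    if not done:
        return None
    return {
        "avg": mean(done),
        "median": median(done),
        "p90": quantiles(done, n=10)[-1],  # last cut point = 90th percentile
    }
```

Note that in-flight work (None durations) is excluded from the aggregates rather than treated as zero, which matches the NULL semantics described above.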

Each flow tracks specific timestamps and calculates durations between them:

Discovery Flow timestamps:

  • discoveryCreatedAt - When the discovery issue was created
  • discoveryClosedAt - When the discovery was closed
  • firstPrCreatedAt - When the first implementation PR was created
  • firstProductionDeployedAt - When code first reached production

Delivery Flow timestamps:

  • issueCreatedAt - When the work item was created
  • issueClosedAt - When the work item was closed
  • firstPrCreatedAt - When the first PR was created
  • firstPrMergedAt - When the first PR was merged
  • firstProductionSuccessAt - When the code successfully deployed to production
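The Delivery Flow timestamps above chain into stage durations. The sketch below assumes a record exposing those fields as dictionary keys; the stage names ("review_time", "cycle_time", etc.) are illustrative, not the cube's actual measure names.

```python
from datetime import datetime

def delivery_durations(issue):
    """Stage durations (in seconds) between Delivery Flow timestamps;
    None when the later event hasn't happened yet."""
    def span(a, b):
        if issue.get(a) is None or issue.get(b) is None:
            return None
        return (issue[b] - issue[a]).total_seconds()

    return {
        # creation -> first PR: how long work waits before coding starts
        "time_to_first_pr": span("issueCreatedAt", "firstPrCreatedAt"),
        # first PR opened -> merged: review and rework time
        "review_time": span("firstPrCreatedAt", "firstPrMergedAt"),
        # merge -> production: release lead time
        "deploy_lead_time": span("firstPrMergedAt", "firstProductionSuccessAt"),
        # end-to-end cycle time
        "cycle_time": span("issueCreatedAt", "firstProductionSuccessAt"),
    }
```

Because each stage is computed independently, an issue that has a merged PR but no production deploy yet reports real values for the early stages and None for the later ones.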

Deployment Flow timestamps:

  • deploymentCreatedAt - When the deployment was triggered
  • firstFailureAt - When the first failure occurred (if any)
  • firstSuccessAt - When the deployment first succeeded

Use the Discovery Flow when:

  • Evaluating research effectiveness
  • Measuring validation rates (discoveries → features)
  • Analyzing the pipeline from research to production
  • Identifying bottlenecks between discovery and implementation

Use the Delivery Flow when:

  • Tracking sprint/cycle performance
  • Measuring team velocity and throughput
  • Analyzing code review efficiency
  • Understanding planned vs unplanned work ratios

Use the Deployment Flow when:

  • Measuring DORA metrics (deployment frequency, change failure rate, MTTR)
  • Analyzing deployment pipeline reliability
  • Identifying production stability issues
  • Benchmarking against industry standards
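The DORA metrics mentioned above can be derived from the three Deployment Flow timestamps. The sketch below is a simplified model under stated assumptions: each row carries the timestamp fields listed earlier, a deployment "failed" if firstFailureAt is set, and recovery time runs from first failure to first subsequent success.

```python
from datetime import datetime

def dora_metrics(deployments, period_days):
    """Rough DORA metrics from Deployment Flow rows."""
    total = len(deployments)
    failed = [d for d in deployments if d["firstFailureAt"] is not None]
    # Recovery time: first failure -> first success, where both occurred.
    recoveries = [
        (d["firstSuccessAt"] - d["firstFailureAt"]).total_seconds()
        for d in failed
        if d["firstSuccessAt"] is not None
    ]
    return {
        "deployment_frequency_per_day": total / period_days,
        "change_failure_rate": len(failed) / total if total else None,
        "mttr_seconds": sum(recoveries) / len(recoveries) if recoveries else None,
    }
```

Deployments that failed and never recovered contribute to the failure rate but are excluded from MTTR, since their recovery duration is still NULL.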

Q: Why are some metrics NULL? A: Duration metrics require both start and end timestamps. If work hasn’t reached a stage (e.g., no PR created yet), the duration to that stage will be NULL.

Q: How often do metrics update? A: Fact tables refresh on a schedule (15-60 minutes). Real-time data requires querying the live tables directly.

Q: Can I filter by team? A: Yes, all flow cubes support filtering by team via the Teams dimension. Issues and deployments are associated with teams through their projects.

Q: Why do average and median differ significantly? A: A few outliers can skew averages dramatically. Median is often more representative of “typical” performance. We recommend using median for most analyses.
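The skew described in that answer is easy to demonstrate with made-up cycle times: nine issues that close in roughly a day and one that stalls for weeks.

```python
from statistics import mean, median

# Hypothetical cycle times in hours: nine typical issues, one stalled outlier.
cycle_times = [20, 22, 24, 24, 25, 26, 26, 28, 30, 505]

print(f"average: {mean(cycle_times):.0f}h")   # pulled far up by the outlier
print(f"median:  {median(cycle_times):.0f}h")  # close to typical performance
```

A single 505-hour outlier nearly triples the average while barely moving the median, which is why median is the better default for these dashboards.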