These cubes capture user feedback through two mechanisms: periodic surveys (following the SPACE framework) and session-specific assessments. Both provide qualitative insights that complement quantitative session metrics.
Source Table: survey_responses
Description: Analytics for scheduled survey responses, including SPACE framework questions for discovery and delivery teams.
| Dimension | Type | Description |
| --- | --- | --- |
| id | string | Response ID |
| surveyType | string | Type of survey |
| userId | string | Respondent |
| completedAt | time | Completion timestamp |
| createdAt | time | Creation timestamp |
| Dimension | Type | Description |
| --- | --- | --- |
| taskHelpfulness | number | AI tool helpfulness (1-7) |
| cognitiveLoad | number | Mental effort required (1-7) |
| npsScore | number | Net Promoter Score (0-10) |
| deploymentConfidence | number | Confidence in deployments (1-5) |
| overallProductivity | number | Productivity rating (1-5) |
| jobSatisfaction | number | Job satisfaction (1-5) |
| aiToolImpact | number | AI tool impact (1-5) |
| teamCollaboration | number | Collaboration quality (1-5) |
| workLifeBalance | number | Work-life balance (1-5) |
| Dimension | Type | Description |
| --- | --- | --- |
| discoverySatisfaction | number | Satisfaction with discovery process (1-5) |
| stakeholderConfidence | number | Stakeholder confidence in decisions (1-5) |
| customerTouchpointFrequency | number | Customer interactions per week |
| testedAssumptionsCount | number | Assumptions tested per sprint |
| crossFunctionalParticipation | boolean | Cross-team participation flag |
| insightUtilization | number | % of insights used in decisions |
| timeToFirstValidationDays | number | Days to first validation |
| buildMeasureLearnCycleDays | number | Days per build-measure-learn (BML) cycle |
| Dimension | Type | Description |
| --- | --- | --- |
| codeQualityConfidence | number | Confidence in code quality (1-5) |
| flowStateFrequency | string | How often in flow state |
| buildCicdSatisfaction | number | CI/CD satisfaction (1-5) |
| techDebtImpact | number | Tech debt impact (1-5) |
| codeReviewQuality | number | Review quality (1-5) |
| codeReviewSpeed | number | Review speed (1-5) |
| sprintCompletionConfidence | number | Sprint confidence (1-5) |
| Measure | Description |
| --- | --- |
| count | Total responses |
| Measure | Description |
| --- | --- |
| avgProductivity | Average overall productivity |
| avgSatisfaction | Average job satisfaction |
| avgAiImpact | Average AI tool impact |
| avgTaskHelpfulness | Average AI helpfulness |
| avgCognitiveLoad | Average cognitive load |
| avgCollaboration | Average team collaboration |
| avgWorkLifeBalance | Average work-life balance |
| avgConfidence | Average deployment confidence |
| Measure | Description |
| --- | --- |
| avgNps | Average NPS score |
| promoterCount | Promoters (9-10) |
| passiveCount | Passives (7-8) |
| detractorCount | Detractors (0-6) |
| promoterPercentage | % promoters |
| detractorPercentage | % detractors |
| npsScore | Calculated NPS (promoter% - detractor%) |
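The bucket measures above feed the calculated `npsScore`. A minimal sketch of that arithmetic (illustrative JavaScript, not the cube's actual SQL):

```javascript
// npsScore = promoterPercentage - detractorPercentage, using the same
// buckets as the measures above: promoters 9-10, passives 7-8, detractors 0-6.
function npsScore(scores) {
  const promoters = scores.filter((s) => s >= 9).length;  // promoterCount
  const detractors = scores.filter((s) => s <= 6).length; // detractorCount
  return ((promoters - detractors) / scores.length) * 100;
}
```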
| Measure | Description |
| --- | --- |
| avgDiscoverySatisfaction | Average discovery satisfaction |
| avgStakeholderConfidence | Average stakeholder confidence |
| avgCustomerTouchpoints | Average customer touchpoints/week |
| avgTestedAssumptions | Average tested assumptions/sprint |
| crossFunctionalParticipationRate | % with cross-functional participation |
| avgInsightUtilization | Average insight utilization |
| avgTimeToFirstValidation | Average days to first validation |
| avgBuildMeasureLearnCycle | Average BML cycle days |
| Measure | Description |
| --- | --- |
| avgCodeQualityConfidence | Average code quality confidence |
| avgBuildCicdSatisfaction | Average CI/CD satisfaction |
| avgTechDebtImpact | Average tech debt impact |
| avgCodeReviewQuality | Average review quality |
| avgCodeReviewSpeed | Average review speed |
| avgSprintCompletionConfidence | Average sprint confidence |
| Measure | Description |
| --- | --- |
| avgDuration | Average completion time (seconds) |
- SPACE Framework Analysis: track satisfaction, performance, activity, communication/collaboration, and efficiency
- Team Health Monitoring: monitor job satisfaction and work-life balance
- AI Tool ROI: measure perceived AI impact on productivity
- NPS Tracking: monitor promoter/detractor trends
- Discovery vs Delivery: compare team-specific metrics
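As a concrete starting point for the Team Health Monitoring use case above, a query against this cube might look like the sketch below. The measure and dimension names come from this section's tables; the monthly granularity and overall payload shape are illustrative assumptions.

```javascript
// Hypothetical team-health query: average job satisfaction and work-life
// balance per month, using measures defined on SurveyResponses above.
const teamHealthQuery = {
  measures: [
    "SurveyResponses.avgSatisfaction",
    "SurveyResponses.avgWorkLifeBalance",
  ],
  timeDimensions: [
    { dimension: "SurveyResponses.completedAt", granularity: "month" },
  ],
};
```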
Source Table: session_assessments
Description: User feedback and ratings for individual AI coding sessions, including detailed survey questions.
| Dimension | Type | Description |
| --- | --- | --- |
| id | string | Assessment ID |
| sessionId | string | Associated session |
| userId | string | Respondent |
| provider | string | AI provider |
| rating | string | Overall rating (thumbs_up, meh, thumbs_down) |
| surveyType | string | Survey type (short, standard, full) |
| completedAt | time | Completion timestamp |
| createdAt | time | Creation timestamp |
| Dimension | Type | Description |
| --- | --- | --- |
| taskHelpfulness | number | Helpfulness rating (1-7) |
| effortImpact | string | Impact on effort level |
| speedComparison | string | Speed vs working alone |
| Dimension | Type | Description |
| --- | --- | --- |
| verificationFrequency | number | How often output was verified (1-7) |
| deploymentConfidence | number | Confidence deploying output (1-5) |
| errorDetectability | string | How easy errors were to spot |
| Dimension | Type | Description |
| --- | --- | --- |
| cognitiveLoad | number | Mental effort required (1-7) |
| mentalAlignment | number | AI alignment with thinking (1-5) |
| userControl | number | Feeling of control (1-5) |
| Dimension | Type | Description |
| --- | --- | --- |
| learningOutcome | string | What was learned |
| understandingDepth | string | Understanding of generated code |
| growthImpact | string | Impact on skill growth |
| Dimension | Type | Description |
| --- | --- | --- |
| npsScore | number | Net Promoter Score (0-10) |
| npsCategory | string | Promoter/Passive/Detractor |
| taskEnjoyment | string | Enjoyment level |
| stuckFrequency | string | How often the user got stuck |
| Dimension | Type | Description |
| --- | --- | --- |
| aiPerception | string | Perception of the AI tool |
| pairProgrammingComparison | number | Comparison to pair programming |
| futurePreference | string | Future usage preference |
| Measure | Description |
| --- | --- |
| count | Total assessments |
| shortSurveyCount | Short survey completions |
| standardSurveyCount | Standard survey completions |
| fullSurveyCount | Full survey completions |
| Measure | Description |
| --- | --- |
| thumbsUpCount | Thumbs up count |
| mehCount | Meh count |
| thumbsDownCount | Thumbs down count |
| thumbsUpPercentage | % thumbs up |
| mehPercentage | % meh |
| thumbsDownPercentage | % thumbs down |
| avgRating | Average rating (1-3 scale) |
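A minimal sketch of how `avgRating` and `thumbsUpPercentage` could be derived from the `rating` dimension. The numeric mapping thumbs_down=1, meh=2, thumbs_up=3 is an assumption consistent with the "1-3 scale" noted above, not confirmed by the source.

```javascript
// Assumed rating-to-number mapping for the 1-3 avgRating scale.
const RATING_VALUES = { thumbs_down: 1, meh: 2, thumbs_up: 3 };

// Mean of the mapped rating values.
function avgRating(ratings) {
  return ratings.reduce((sum, r) => sum + RATING_VALUES[r], 0) / ratings.length;
}

// Share of assessments rated thumbs_up, as a percentage.
function thumbsUpPercentage(ratings) {
  return (ratings.filter((r) => r === "thumbs_up").length / ratings.length) * 100;
}
```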
| Measure | Description |
| --- | --- |
| avgDuration | Average completion time (seconds) |
| maxDuration | Maximum completion time |
| totalDuration | Total completion time |
| Measure | Description |
| --- | --- |
| avgTaskHelpfulness | Average helpfulness |
| avgVerificationFrequency | Average verification frequency |
| avgDeploymentConfidence | Average deployment confidence |
| avgCognitiveLoad | Average cognitive load |
| avgMentalAlignment | Average mental alignment |
| avgUserControl | Average user control |
| avgPairProgrammingComparison | Average pair programming comparison |
| Measure | Description |
| --- | --- |
| avgNpsScore | Average NPS score |
| promotersCount | Promoters (9-10) |
| passivesCount | Passives (7-8) |
| detractorsCount | Detractors (0-6) |
| promoterPercentage | % promoters |
| detractorPercentage | % detractors |
| npsScore | Calculated NPS |
These scores combine multiple responses into normalized 0-100 scales:
| Measure | Description | Calculation |
| --- | --- | --- |
| helpfulnessScore | Overall helpfulness | (avgTaskHelpfulness / 7) * 100 |
| accuracyScore | Trust/accuracy score | Combines confidence + inverted verification |
| speedScore | Speed perception | Weighted speed comparison responses |
| clarityScore | Cognitive clarity | Combines inverted load + alignment |
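The normalization can be sketched as below. `helpfulnessScore` follows the formula given in the table; for `clarityScore` the equal weighting of the two inputs is an assumption, since the source only says it combines inverted load with alignment.

```javascript
// (avgTaskHelpfulness / 7) * 100, per the table: 1-7 scale onto 0-100.
function helpfulnessScore(avgTaskHelpfulness) {
  return (avgTaskHelpfulness / 7) * 100;
}

// Combines inverted cognitive load (1-7) with mental alignment (1-5),
// each rescaled to 0-100; the 50/50 weighting is an assumption.
function clarityScore(avgCognitiveLoad, avgMentalAlignment) {
  const invertedLoad = ((7 - avgCognitiveLoad) / 6) * 100; // low load scores high
  const alignment = ((avgMentalAlignment - 1) / 4) * 100;  // 1-5 onto 0-100
  return (invertedLoad + alignment) / 2;
}
```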
- Session Quality: track thumbs up/down trends
- Provider Comparison: compare ratings across AI providers
- Survey Completion: monitor short vs full survey rates
- Quality Indicators: use composite scores for dashboards
- NPS Monitoring: track promoter/detractor trends by provider
The Survey Responses cube implements the SPACE framework for developer productivity:
| Letter | Dimension | Discovery Questions | Delivery Questions |
| --- | --- | --- | --- |
| S | Satisfaction & Wellbeing | discoverySatisfaction | jobSatisfaction |
| P | Performance | stakeholderConfidence | codeQualityConfidence |
| A | Activity | customerTouchpoints, testedAssumptions | deploymentFrequency |
| C | Communication & Collaboration | crossFunctionalParticipation, insightUtilization | codeReviewQuality |
| E | Efficiency & Flow | timeToFirstValidation, buildMeasureLearnCycle | buildCicdSatisfaction |
| Join | Description |
| --- | --- |
| Users | Respondent |
| SurveyInstances | Parent survey instance |
| Teams | Via user membership |
| Join | Description |
| --- | --- |
| Sessions | Parent session |
| Users | Respondent |
| ProjectsViaSessions | Project via session |
| TeamsViaUsers | Team via user |
Survey NPS by team:

```js
{
  measures: ["SurveyResponses.npsScore", "SurveyResponses.count"]
}
```
Assessment ratings by provider:

```js
{
  measures: ["Assessments.thumbsUpPercentage", "Assessments.avgRating"],
  dimensions: ["Assessments.provider"]
}
```
SPACE metrics for discovery:

```js
{
  measures: [
    "SurveyResponses.avgDiscoverySatisfaction",
    "SurveyResponses.avgTimeToFirstValidation",
    "SurveyResponses.crossFunctionalParticipationRate"
  ],
  filters: [
    { member: "SurveyResponses.surveyType", operator: "equals", values: ["discovery"] }
  ]
}
```