Surveys & Assessments Cubes

These cubes capture user feedback through two mechanisms: periodic surveys (following the SPACE framework) and session-specific assessments. Both provide qualitative insights that complement quantitative session metrics.

Source Table: `survey_responses`

Description: Analytics for scheduled survey responses, including SPACE framework questions for discovery and delivery teams.

| Dimension | Type | Description |
| --- | --- | --- |
| id | string | Response ID |
| surveyType | string | Type of survey |
| userId | string | Respondent |
| completedAt | time | Completion timestamp |
| createdAt | time | Creation timestamp |

| Dimension | Type | Description |
| --- | --- | --- |
| taskHelpfulness | number | AI tool helpfulness (1-7) |
| cognitiveLoad | number | Mental effort required (1-7) |
| npsScore | number | Net Promoter Score (0-10) |
| deploymentConfidence | number | Confidence in deployments (1-5) |
| overallProductivity | number | Productivity rating (1-5) |
| jobSatisfaction | number | Job satisfaction (1-5) |
| aiToolImpact | number | AI tool impact (1-5) |
| teamCollaboration | number | Collaboration quality (1-5) |
| workLifeBalance | number | Work-life balance (1-5) |

| Dimension | Type | Description |
| --- | --- | --- |
| discoverySatisfaction | number | Satisfaction with discovery process (1-5) |
| stakeholderConfidence | number | Stakeholder confidence in decisions (1-5) |
| customerTouchpointFrequency | number | Customer interactions per week |
| testedAssumptionsCount | number | Assumptions tested per sprint |
| crossFunctionalParticipation | boolean | Cross-team participation flag |
| insightUtilization | number | % insights used in decisions |
| timeToFirstValidationDays | number | Days to first validation |
| buildMeasureLearnCycleDays | number | Days per BML cycle |

| Dimension | Type | Description |
| --- | --- | --- |
| codeQualityConfidence | number | Confidence in code quality (1-5) |
| flowStateFrequency | string | How often in flow state |
| buildCicdSatisfaction | number | CI/CD satisfaction (1-5) |
| techDebtImpact | number | Tech debt impact (1-5) |
| codeReviewQuality | number | Review quality (1-5) |
| codeReviewSpeed | number | Review speed (1-5) |
| sprintCompletionConfidence | number | Sprint confidence (1-5) |
| Measure | Description |
| --- | --- |
| count | Total responses |

| Measure | Description |
| --- | --- |
| avgProductivity | Average overall productivity |
| avgSatisfaction | Average job satisfaction |
| avgAiImpact | Average AI tool impact |
| avgTaskHelpfulness | Average AI helpfulness |
| avgCognitiveLoad | Average cognitive load |
| avgCollaboration | Average team collaboration |
| avgWorkLifeBalance | Average work-life balance |
| avgConfidence | Average deployment confidence |

| Measure | Description |
| --- | --- |
| avgNps | Average NPS score |
| promoterCount | Promoters (9-10) |
| passiveCount | Passives (7-8) |
| detractorCount | Detractors (0-6) |
| promoterPercentage | % promoters |
| detractorPercentage | % detractors |
| npsScore | Calculated NPS (promoter% - detractor%) |
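The NPS measures above follow the standard promoter/detractor split; the same calculation can be reproduced outside the cube with a quick Python sketch (the function name is illustrative):

```python
def nps_score(scores):
    """Net Promoter Score from 0-10 ratings: promoters are 9-10,
    detractors 0-6, and NPS = promoter% - detractor%."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 5 promoters, 3 passives, 2 detractors out of 10 responses
print(nps_score([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))  # → 30.0
```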
| Measure | Description |
| --- | --- |
| avgDiscoverySatisfaction | Average discovery satisfaction |
| avgStakeholderConfidence | Average stakeholder confidence |
| avgCustomerTouchpoints | Average customer touchpoints/week |
| avgTestedAssumptions | Average tested assumptions/sprint |
| crossFunctionalParticipationRate | % with cross-functional participation |
| avgInsightUtilization | Average insight utilization |
| avgTimeToFirstValidation | Average days to first validation |
| avgBuildMeasureLearnCycle | Average BML cycle days |

| Measure | Description |
| --- | --- |
| avgCodeQualityConfidence | Average code quality confidence |
| avgBuildCicdSatisfaction | Average CI/CD satisfaction |
| avgTechDebtImpact | Average tech debt impact |
| avgCodeReviewQuality | Average review quality |
| avgCodeReviewSpeed | Average review speed |
| avgSprintCompletionConfidence | Average sprint confidence |

| Measure | Description |
| --- | --- |
| avgDuration | Average completion time (seconds) |
  • SPACE Framework Analysis: Track satisfaction, performance, activity, collaboration, and efficiency
  • Team Health Monitoring: Monitor job satisfaction and work-life balance
  • AI Tool ROI: Measure perceived AI impact on productivity
  • NPS Tracking: Monitor promoter/detractor trends
  • Discovery vs Delivery: Compare team-specific metrics

Source Table: `session_assessments`

Description: User feedback and ratings for individual AI coding sessions, including detailed survey questions.

| Dimension | Type | Description |
| --- | --- | --- |
| id | string | Assessment ID |
| sessionId | string | Associated session |
| userId | string | Respondent |
| provider | string | AI provider |
| rating | string | Overall rating (thumbs_up, meh, thumbs_down) |
| surveyType | string | Survey type (short, standard, full) |
| completedAt | time | Completion timestamp |
| createdAt | time | Creation timestamp |

| Dimension | Type | Description |
| --- | --- | --- |
| taskHelpfulness | number | Helpfulness rating (1-7) |
| effortImpact | string | Impact on effort level |
| speedComparison | string | Speed vs working alone |

| Dimension | Type | Description |
| --- | --- | --- |
| verificationFrequency | number | How often verified output (1-7) |
| deploymentConfidence | number | Confidence deploying output (1-5) |
| errorDetectability | string | How easy to spot errors |

| Dimension | Type | Description |
| --- | --- | --- |
| cognitiveLoad | number | Mental effort required (1-7) |
| mentalAlignment | number | AI alignment with thinking (1-5) |
| userControl | number | Feeling of control (1-5) |

| Dimension | Type | Description |
| --- | --- | --- |
| learningOutcome | string | What was learned |
| understandingDepth | string | Understanding of generated code |
| growthImpact | string | Impact on skill growth |

| Dimension | Type | Description |
| --- | --- | --- |
| npsScore | number | Net Promoter Score (0-10) |
| npsCategory | string | Promoter/Passive/Detractor |
| taskEnjoyment | string | Enjoyment level |
| stuckFrequency | string | How often got stuck |

| Dimension | Type | Description |
| --- | --- | --- |
| aiPerception | string | Perception of AI tool |
| pairProgrammingComparison | number | Comparison to pair programming |
| futurePreference | string | Future usage preference |
| Measure | Description |
| --- | --- |
| count | Total assessments |
| shortSurveyCount | Short survey completions |
| standardSurveyCount | Standard survey completions |
| fullSurveyCount | Full survey completions |

| Measure | Description |
| --- | --- |
| thumbsUpCount | Thumbs up count |
| mehCount | Meh count |
| thumbsDownCount | Thumbs down count |
| thumbsUpPercentage | % thumbs up |
| mehPercentage | % meh |
| thumbsDownPercentage | % thumbs down |
| avgRating | Average rating (1-3 scale) |
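The 1-3 scale behind avgRating is not spelled out here; assuming the natural mapping of thumbs_down=1, meh=2, thumbs_up=3, it can be sketched as:

```python
RATING_VALUES = {"thumbs_down": 1, "meh": 2, "thumbs_up": 3}  # assumed mapping

def avg_rating(ratings):
    """Mean session rating on the assumed 1-3 scale."""
    return sum(RATING_VALUES[r] for r in ratings) / len(ratings)

print(avg_rating(["thumbs_up", "thumbs_up", "meh", "thumbs_down"]))  # → 2.25
```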

| Measure | Description |
| --- | --- |
| avgDuration | Average completion time (seconds) |
| maxDuration | Maximum completion time |
| totalDuration | Total completion time |

| Measure | Description |
| --- | --- |
| avgTaskHelpfulness | Average helpfulness |
| avgVerificationFrequency | Average verification frequency |
| avgDeploymentConfidence | Average deployment confidence |
| avgCognitiveLoad | Average cognitive load |
| avgMentalAlignment | Average mental alignment |
| avgUserControl | Average user control |
| avgPairProgrammingComparison | Average pair programming comparison |

| Measure | Description |
| --- | --- |
| avgNpsScore | Average NPS score |
| promotersCount | Promoters (9-10) |
| passivesCount | Passives (7-8) |
| detractorsCount | Detractors (0-6) |
| promoterPercentage | % promoters |
| detractorPercentage | % detractors |
| npsScore | Calculated NPS |

These scores combine multiple responses into normalized 0-100 scales:

| Measure | Description | Calculation |
| --- | --- | --- |
| helpfulnessScore | Overall helpfulness | (avgTaskHelpfulness / 7) * 100 |
| accuracyScore | Trust/accuracy score | Combines confidence + inverted verification |
| speedScore | Speed perception | Weighted speed comparison responses |
| clarityScore | Cognitive clarity | Combines inverted load + alignment |
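Only helpfulnessScore has an exact formula in the table; the others are described qualitatively. A minimal Python sketch, assuming "inverted load + alignment" for clarityScore means normalizing each component to 0-100 (with the 1-7 load scale inverted) and averaging; the cube's actual weighting may differ:

```python
def helpfulness_score(avg_task_helpfulness):
    """(avgTaskHelpfulness / 7) * 100, exactly as documented."""
    return (avg_task_helpfulness / 7) * 100

def clarity_score(avg_cognitive_load, avg_mental_alignment):
    """Assumed form: invert the 1-7 cognitive-load scale, normalize
    both components to 0-100, then take the mean."""
    inverted_load = (7 - avg_cognitive_load) / 6 * 100  # load 1 -> 100, 7 -> 0
    alignment = (avg_mental_alignment - 1) / 4 * 100    # alignment 1 -> 0, 5 -> 100
    return (inverted_load + alignment) / 2

print(round(helpfulness_score(5.6), 1))  # → 80.0
print(round(clarity_score(3, 4), 1))     # → 70.8
```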
  • Session Quality: Track thumbs up/down trends
  • Provider Comparison: Compare ratings across AI providers
  • Survey Completion: Monitor short vs full survey rates
  • Quality Indicators: Use composite scores for dashboards
  • NPS Monitoring: Track promoter/detractor trends by provider

The Survey Responses cube implements the SPACE framework for developer productivity:

| Letter | Dimension | Discovery Questions | Delivery Questions |
| --- | --- | --- | --- |
| S | Satisfaction & Wellbeing | discoverySatisfaction | jobSatisfaction |
| P | Performance | stakeholderConfidence | codeQualityConfidence |
| A | Activity | customerTouchpoints, testedAssumptions | deploymentFrequency |
| C | Communication & Collaboration | crossFunctionalParticipation, insightUtilization | codeReviewQuality |
| E | Efficiency & Flow | timeToFirstValidation, buildMeasureLearnCycle | buildCicdSatisfaction |
| Join | Description |
| --- | --- |
| Users | Respondent |
| SurveyInstances | Parent survey instance |
| Teams | Via user membership |

| Join | Description |
| --- | --- |
| Sessions | Parent session |
| Users | Respondent |
| ProjectsViaSessions | Project via session |
| TeamsViaUsers | Team via user |

Survey NPS by team:

```javascript
{
  measures: ["SurveyResponses.npsScore", "SurveyResponses.count"],
  dimensions: ["Teams.name"]
}
```

Assessment ratings by provider:

```javascript
{
  measures: ["Assessments.thumbsUpPercentage", "Assessments.avgRating"],
  dimensions: ["Assessments.provider"]
}
```

SPACE metrics for discovery:

```javascript
{
  measures: [
    "SurveyResponses.avgDiscoverySatisfaction",
    "SurveyResponses.avgTimeToFirstValidation",
    "SurveyResponses.crossFunctionalParticipationRate"
  ],
  filters: [
    { member: "SurveyResponses.surveyType", operator: "equals", values: ["discovery"] }
  ]
}
```
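Queries like these can be POSTed to Cube's REST API `/cubejs-api/v1/load` endpoint. A sketch using only the Python standard library; the deployment URL and API token below are placeholders, not real values:

```python
import json
import urllib.request

CUBE_URL = "https://your-deployment.example.com/cubejs-api/v1/load"  # placeholder
API_TOKEN = "YOUR_CUBE_API_TOKEN"  # placeholder

# The discovery SPACE query from above, as a JSON-serializable dict.
DISCOVERY_QUERY = {
    "measures": [
        "SurveyResponses.avgDiscoverySatisfaction",
        "SurveyResponses.avgTimeToFirstValidation",
        "SurveyResponses.crossFunctionalParticipationRate",
    ],
    "filters": [
        {
            "member": "SurveyResponses.surveyType",
            "operator": "equals",
            "values": ["discovery"],
        }
    ],
}

def fetch_discovery_metrics():
    """POST the query to the load endpoint and return the result rows."""
    req = urllib.request.Request(
        CUBE_URL,
        data=json.dumps({"query": DISCOVERY_QUERY}).encode("utf-8"),
        headers={"Authorization": API_TOKEN, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]
```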