Perceived Outcomes & Satisfaction
Why It Matters
These measures don't capture maturity directly, but they indicate whether Copilot is delivering value. Per DORA 2025, AI adoption now correlates positively with throughput (a reversal from 2024) but still correlates negatively with delivery stability.
State of AI-Assisted Software Development 2025
The 2025 DORA report introduces an 'AI Capabilities Model' identifying seven practices that amplify AI benefits. Its core insight is that AI is an 'amplifier': it magnifies existing organizational strengths AND weaknesses. Key stats: 89% of organizations are prioritizing AI, 76% of developers use it daily, but 39% report low trust. The trust research is critical: developers who trust AI more are more productive, but trust must be earned through organizational support (policies, training time, addressing concerns). The 451% adoption increase associated with acceptable-use policies is remarkable - clarity enables adoption.
- 89% of organizations prioritizing AI integration into applications
- 76% of technologists rely on AI for parts of their daily work
- 75% of developers report positive productivity impact from AI
The SPACE of Developer Productivity: There's more to it than you think
The SPACE framework defines five dimensions of developer productivity: Satisfaction and wellbeing, Performance, Activity, Communication and collaboration, and Efficiency and flow. No single metric captures productivity; organizations must measure across dimensions. Although it pre-dates AI coding tools (published 2021), the framework is foundational for measuring AI's actual impact on development, and it anchors this outcomes dimension.
- Five dimensions of productivity: Satisfaction, Performance, Activity, Communication, Efficiency
- No single metric captures productivity
Assessment Questions (8)
Maximum possible score: 38 points
○ Q1 likert 5 pts
Copilot helps me complete tasks faster.
Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
This is the most rigorous 2025 study on AI coding productivity. The RCT methodology (16 experienced developers, 246 tasks, $150/hr compensation) makes this highly credible. The 39-44 percentage point gap between perceived and actual productivity is the key insight for our trust_calibration dimension. This directly supports recommendations about not over-trusting AI suggestions and maintaining verification practices.
- Experienced developers were 19% slower with AI
- Developers perceived 20% speedup (39-44 percentage point gap)
- Self-reported productivity may not reflect reality
Research: Quantifying GitHub Copilot's impact in the enterprise with Accenture
This is the primary source for the 30% acceptance-rate benchmark and the 88% code-retention statistic. The 95% enjoyment and 90% fulfillment stats are powerful for adoption justification, and the 84% increase in successful builds directly supports the claim that AI doesn't sacrifice quality for speed. Published May 2024, it reflects mature Copilot usage patterns.
- 95% of developers said they enjoyed coding more with GitHub Copilot
- 90% of developers felt more fulfilled with their jobs when using GitHub Copilot
- Developers accepted around 30% of GitHub Copilot's suggestions
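The "39-44 percentage point gap" quoted from the RCT above is just the spread between the self-reported and measured speed changes. A minimal sketch reproducing the lower bound (the 44-point upper bound comes from other estimates in the study not quoted in this section):

```python
# Figures quoted above: developers self-reported a 20% speedup while
# measurements showed them 19% slower. The spread between the two is
# the "perception gap" in percentage points.
measured_change = -19   # actual speed change, percent (negative = slower)
perceived_change = 20   # self-reported speedup, percent

gap_pp = perceived_change - measured_change
print(gap_pp)  # -> 39
```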
○ Q2 likert 5 pts
Copilot helps me stay in flow when coding.
State of AI-Assisted Software Development 2025
(DORA 2025; full summary and key stats under 'Why It Matters' above.)
Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
(RCT study; full summary and key findings under Q1.)
○ Q3 single choice 4 pts
Compared to coding without Copilot, how much time do you estimate you save per week?
Note: JetBrains 2025: 19% of developers save 8+ hours/week (up from 9% in 2024). Saving 6+ hours/week indicates power-user territory.
Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
(RCT study; full summary and key findings under Q1.)
The State of Developer Ecosystem 2025
The 2025 JetBrains survey of 24,534 developers shows AI tools have become mainstream (85% regular usage). The 68% who expect AI proficiency to become a job requirement is critical for skill development, and the finding that 19% save 8+ hours/week (up from 9% in 2024) shows productivity gains are real for power users. Key insight: developers want AI to handle mundane tasks but want to keep control of creative and complex work.
- 85% of developers regularly use AI tools for coding and development
- 62% use at least one AI coding assistant, agent, or code editor
- 68% expect AI proficiency will become a job requirement
○ Q4 likert 5 pts
I enjoy my work more when using Copilot.
State of AI-Assisted Software Development 2025
(DORA 2025; full summary and key stats under 'Why It Matters' above.)
State of AI Code Quality 2025
This is the most comprehensive 2025 survey on AI code quality (609 developers). The key insight is the 'Confidence Flywheel' - context-rich suggestions reduce hallucinations, which improves quality, which builds trust. The finding that 80% of PRs don't receive human review when AI tools are enabled is critical for our agentic_supervision dimension. NOTE: The previously cited 1.7x issue rate and 41% commit stats were not found in the current report.
- 82% of developers use AI coding tools daily or weekly
- 65% of developers say at least a quarter of each commit is AI-generated
- 59% say AI has improved code quality
○ Q5 likert 5 pts
The code I produce with Copilot is high quality.
State of AI vs Human Code Generation Report
This is the most rigorous empirical comparison of AI vs human code quality to date. The 1.7x issue rate and specific vulnerability multipliers (2.74x XSS, 1.88x password handling) are critical for trust_calibration recommendations. Key insight: AI makes the same kinds of mistakes humans do, just more often at larger scale. The 8x I/O performance issue rate shows AI favors simple patterns over efficiency.
- AI-generated PRs contain 1.7x more issues overall (10.83 vs 6.45 issues per PR)
- AI PRs show 1.4-1.7x more critical and major issues
- Logic and correctness issues 75% more common in AI PRs
○ Q6 single choice 5 pts
Have you ever measured your actual productivity with vs. without AI tools?
Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
(RCT study; full summary and key findings under Q1.)
○ Q7 single choice 5 pts
Does your organization track business ROI from AI coding tools (not just usage)?
Note: McKinsey 2025: 88% use AI but only 39% see EBIT impact. Most organizations haven't connected AI adoption to business outcomes.
The State of AI in 2025: Agents, Innovation, and Transformation
McKinsey's 2025 survey shows AI use is common (88%) but enterprise value capture is rare (only 39% see EBIT impact). The key differentiator is workflow redesign - high performers are 3x more likely to fundamentally redesign workflows. The 62% experimenting with agents stat is critical for agentic_supervision. Key insight: most organizations are still in pilots, not scaled adoption.
- 88% report regular AI use in at least one business function (up from 78%)
- Nearly two-thirds still in experimentation or piloting phases
- 62% experimenting with AI agents; 23% scaling agents
○ Q8 single choice 4 pts
Has your trust in AI coding tools changed over the past 12 months?
Note: Stack Overflow 2025: positive sentiment dropped from 70%+ to 60%. Calibrated trust (stable, or slightly more cautious) indicates maturity; either extreme (trust collapsing after being burned, or uncritical heavy reliance) may indicate issues.
2025 Stack Overflow Developer Survey
The 2025 Stack Overflow survey shows continued adoption (84%) but declining trust (60% positive sentiment, down from 70%+). Key insight: developers are using AI more but trusting it less. 35% turn to Stack Overflow as a fallback when AI fails.
- 84% of respondents using or planning to use AI tools (up from 76% in 2024)
- 51% of professional developers use AI tools daily
- Positive sentiment dropped from 70%+ (2023-2024) to 60% (2025)
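As a sanity check, the stated 38-point maximum matches the sum of the per-question weights in the question headers above. A minimal sketch (simple summation of per-question maxima is an assumption about the scoring model):

```python
# Per-question maximum points, taken from the Q1-Q8 headers above.
question_points = {
    "Q1": 5, "Q2": 5, "Q3": 4, "Q4": 5,
    "Q5": 5, "Q6": 5, "Q7": 5, "Q8": 4,
}

max_score = sum(question_points.values())
print(max_score)  # -> 38, matching "Maximum possible score: 38 points"
```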