Security-sensitive code, complex business logic, and regulated environments require careful AI usage. As of 2025, however, blanket 'AI avoidance' is giving way to 'managed use with specialized tools': security-tuned agents can now audit AI-generated code.
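To make 'managed use' concrete, here is a minimal sketch of a scan gate for AI-assisted changes: run a security linter over the touched files and block on high-severity findings. Bandit, the HIGH threshold, and the CLI wiring are illustrative assumptions, not a prescribed toolchain.

    import json
    import subprocess
    import sys

    def scan(paths):
        # Bandit exits nonzero when it finds issues, so read stdout directly.
        result = subprocess.run(
            ["bandit", "-f", "json", *paths],
            capture_output=True, text=True,
        )
        report = json.loads(result.stdout or "{}")
        return report.get("results", [])

    if __name__ == "__main__":
        findings = scan(sys.argv[1:])
        high = [f for f in findings if f.get("issue_severity") == "HIGH"]
        for f in high:
            print(f"{f['filename']}:{f['line_number']}: {f['issue_text']}")
        sys.exit(1 if high else 0)

The same gate can run in CI, so the policy applies regardless of which model generated the code.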
October 2025 Update: GenAI Code Security Report
Veracode
This is the primary source for our 45% security vulnerability claim. The October 2025 update confirms that AI code security issues persist even with newer models. The finding that 'bigger models ≠ more secure code' is important for our model_routing dimension - it suggests security scanning is needed regardless of which model is used. The 72% Java-specific rate mentioned in our citations may be from the full PDF report.
Key Findings:
AI-generated code introduced risky security flaws in 45% of tests
100+ LLMs tested across Java, JavaScript, Python, and C#
AI Code Review and the Best AI Code Review Tools in 2025
Qodo
Comprehensive overview of AI code review tools and AI-reviewing-AI patterns. Key for agentic_supervision dimension - validates that AI reviewing AI is an emerging best practice.
Key Findings:
84% of developers now use AI tools; 41% of code is AI-generated
Leading AI review tools: CodeRabbit, Codacy Guardrails, Snyk DeepCode
AI-to-AI review is an emerging pattern (AI reviews AI-generated code)
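As a sketch of the AI-to-AI pattern above: a second model reviews the staged diff before it reaches a human. call_review_model is a hypothetical stand-in for whichever review tool or LLM client a team actually uses.

    import subprocess

    def staged_diff() -> str:
        # Collect the staged changes to hand to the reviewer.
        return subprocess.run(
            ["git", "diff", "--cached"], capture_output=True, text=True
        ).stdout

    def call_review_model(prompt: str) -> str:
        # Hypothetical: wire up CodeRabbit, Snyk, or an LLM API here.
        raise NotImplementedError

    def review() -> str:
        prompt = (
            "You are a code reviewer. Flag security issues, missing tests, "
            "and logic errors in this diff:\n\n" + staged_diff()
        )
        return call_review_model(prompt)

The point is placement, not the tool: the AI review runs before human review, so the human sees flagged issues rather than raw volume.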
HIPAA Security Rule Notice of Proposed Rulemaking to Strengthen Cybersecurity for Electronic Protected Health Information
HHS OCR (Office for Civil Rights)
IMPORTANT: This is a PROPOSED rule (NPRM), not a finalized regulation. Published December 27, 2024 with 60-day comment period. While it addresses cybersecurity broadly (encryption, MFA, audits), it does not specifically address AI coding tools. The relevance to our survey is about the broader compliance environment healthcare organizations must consider when using AI tools that might touch ePHI.
Key Findings:
PROPOSED rule (NPRM) - not yet finalized
Major update to strengthen cybersecurity for ePHI
Requires encryption of ePHI at rest and in transit
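For the at-rest requirement, a minimal sketch using the cryptography package's Fernet primitive. This illustrates the encryption step only; an actual ePHI program also needs the surrounding key management, access controls, and auditing.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # in practice, store and rotate in a KMS
    fernet = Fernet(key)

    record = b"example visit notes"  # stand-in for an ePHI record
    token = fernet.encrypt(record)   # ciphertext safe to persist at rest
    assert fernet.decrypt(token) == record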
The 'Trust, But Verify' Pattern For AI-Assisted Engineering
This article provides the conceptual framework for our trust_calibration dimension. The three principles (Blind Trust is Vulnerability, Copilot Not Autopilot, Human Accountability Remains) directly inform our survey questions. The emphasis on verification over speed aligns with METR findings. Practical guidance includes starting conservatively with AI on low-stakes tasks.
Key Findings:
Blind trust in AI-generated code is a vulnerability
AI tools function as 'Copilot, Not Autopilot'
Human verification is the new development bottleneck
Are there specific types of code where you deliberately avoid or limit Copilot use?
[1]No - I use it for everything
[3]Yes - a few specific scenarios
[4]Yes - I have clear criteria for when not to use it
[5]Yes - and I use specialized security tools for sensitive areas
Note: Updated for 2025 so that 'managed use with specialized tools' scores higher than blanket avoidance.
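One way to turn 'clear criteria' into tooling is a pre-commit check that flags staged changes touching paths where AI assistance is restricted. The .ai-restricted file name and its glob format are hypothetical; adapt them to your repo's conventions.

    import fnmatch
    import pathlib
    import subprocess
    import sys

    def restricted_patterns():
        # One glob per line, e.g. "src/billing/*" or "*auth*.py".
        cfg = pathlib.Path(".ai-restricted")
        if not cfg.exists():
            return []
        return [line.strip() for line in cfg.read_text().splitlines()
                if line.strip() and not line.startswith("#")]

    def staged_files():
        out = subprocess.run(
            ["git", "diff", "--cached", "--name-only"],
            capture_output=True, text=True,
        ).stdout
        return [line for line in out.splitlines() if line]

    hits = [f for f in staged_files()
            for pattern in restricted_patterns() if fnmatch.fnmatch(f, pattern)]
    if hits:
        print("Staged changes touch AI-restricted paths; apply extra review:")
        for h in hits:
            print("  " + h)
        sys.exit(1)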
2025 DORA Report
DORA (Google Cloud)
The 2025 DORA report introduces the 'AI Capabilities Model' identifying seven practices that amplify AI benefits. The core insight is that AI is an 'amplifier' - it magnifies existing organizational strengths AND weaknesses. Key stats: 89% of orgs prioritizing AI, 76% of devs using daily, but 39% have low trust. The trust research is critical: developers who trust AI more are more productive, but trust must be earned through organizational support (policies, training time, addressing concerns). The 451% adoption increase from acceptable-use policies is remarkable - clarity enables adoption.
Key Findings:
89% of organizations prioritizing AI integration into applications
76% of technologists rely on AI for parts of their daily work
75% of developers report positive productivity impact from AI
If yes: For which types of code do you avoid or limit Copilot? (select all that apply)
[2]Code handling PHI, PII, or regulated data (HIPAA, GDPR)
[1]Complex business logic requiring domain expertise
[1]Performance-critical sections
[1]Novel algorithms or unique implementations
[1]Proprietary/confidential code patterns
[1]When I want to learn/understand deeply
[1]Agent mode for large-scope changes without a review plan
If you work in healthcare or with PHI: Are you aware of the HIPAA implications?
[0]N/A - I don't work with healthcare data
[0]No - I didn't know there were specific regulations
[1]Vaguely - I know there might be compliance issues
[3]Yes - I avoid AI tools for PHI-adjacent code
[5]Yes - we have specific policies and use HIPAA-compliant tools only
Note: GitHub Copilot is NOT HIPAA compliant. Microsoft 365 Copilot is covered under BAA with proper configuration.
Microsoft HIPAA/HITECH Compliance Documentation
Microsoft
Microsoft 365 Copilot is HIPAA compliant when organizations sign a Business Associate Agreement (BAA), but GitHub Copilot is explicitly NOT covered under BAA and cannot be used with Protected Health Information (PHI). This distinction is critical for healthcare developers - using non-compliant tools with PHI exposes organizations to regulatory penalties. Key for appropriate_nonuse dimension.
The EU AI Act establishes a comprehensive risk-based regulatory framework for AI systems, classifying them into prohibited, high-risk, and general-risk categories with varying compliance requirements. Enforcement begins in 2025; organizations using AI coding tools need to assess whether their implementations fall under the high-risk categories. This regulation sets global precedent for AI governance and directly impacts how development teams can deploy AI-assisted development tools.
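A hypothetical triage sketch (not legal guidance) of what 'assess whether implementations fall under high-risk categories' can look like in practice: record the assessment explicitly so it is reviewable. The category rules below are placeholders; the Act's actual criteria must come from compliance counsel.

    from dataclasses import dataclass

    @dataclass
    class AISystem:
        name: str
        performs_safety_function: bool       # placeholder criterion
        embedded_in_regulated_product: bool  # placeholder criterion

    def triage(system: AISystem) -> str:
        # Conservative default: escalate anything near a high-risk marker.
        if system.performs_safety_function or system.embedded_in_regulated_product:
            return "candidate high-risk: full conformity assessment required"
        return "general risk: document usage and monitor guidance"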
If in a regulated industry: Does your organization have AI-specific compliance policies?
[0]No AI-specific policies exist
[1]General AI guidance but not regulation-specific
[3]Yes - policies aligned to our regulatory requirements
[4]Yes - policies reviewed by compliance/legal team
[5]Yes - with regular audits and certification alignment
Note: Regulated industries face heightened AI compliance requirements. EU AI Act enforcement began in 2025.
AI Act Implementation Timeline
European Parliament
EU AI Act enforcement began in 2025 with prohibited practices and GPAI rules. Full application by 2026. Critical for appropriate_nonuse and legal_compliance dimensions.
Key Findings:
EU AI Act entered into force August 1, 2024
Prohibited AI practices effective February 2, 2025
Do you work on safety-critical systems where AI code errors could cause physical harm?
[0]No - not safety-critical
[0]Yes - I use AI tools normally
[2]Yes - I use AI but with extra verification
[4]Yes - I avoid AI for safety-critical components
[5]Yes - AI prohibited by policy for safety-critical code
Note: Safety-critical systems (automotive, medical devices, aviation) require highest verification or AI prohibition.
How well do you understand Copilot's limitations for your specific tech stack and work?
[1]Not well - I'm not sure what it's bad at
[2]Somewhat - I know some general limitations
[3]Well - I know where it struggles in my context
[4]Very well - I could list specific failure patterns
Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
METR
This is the most rigorous 2025 study on AI coding productivity. The RCT methodology (16 experienced developers, 246 tasks, $150/hr compensation) makes this highly credible. The 39-44 percentage point gap between perceived and actual productivity is the key insight for our trust_calibration dimension. This directly supports recommendations about not over-trusting AI suggestions and maintaining verification practices.
Key Findings:
Experienced developers were 19% slower with AI
Developers perceived 20% speedup (39-44 percentage point gap)
Self-reported productivity may not reflect reality
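A worked reading of those numbers (illustrative, assuming 'slower' and 'speedup' refer to task completion time):

    baseline_hours = 10.0                          # hypothetical no-AI task
    perceived_hours = baseline_hours * (1 - 0.20)  # devs expected ~8.0 hours
    measured_hours = baseline_hours * (1 + 0.19)   # RCT measured ~11.9 hours
    # Perceived effect (+20%) minus measured effect (-19%) is the
    # 39-percentage-point gap cited above.
    gap_points = 20 - (-19)
    print(perceived_hours, measured_hours, gap_points)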
How do you ensure your coding skills don't atrophy from AI reliance?
[1]I sometimes code without AI deliberately
[1]I deeply review AI code to learn from it
[1]I practice fundamentals separately (leetcode, learning, etc.)
[1]I make sure I can explain any AI code I use
[2]We have 'AI-free' practice sessions or days
[0]I don't actively think about this
[0]I'm not concerned about skill atrophy
Note: Addy Osmani's research shows skill atrophy is real; junior developers are especially at risk.
AI Won't Kill Junior Devs - But Your Hiring Strategy Might
Addy Osmani reframes the junior developer AI debate from risk to opportunity. Key insight: AI accelerates careers for juniors who adapt by shifting from 'write code' to 'supervise AI code'. Teams with updated mentorship create accelerated apprenticeships. The real threat is hiring strategies, not AI itself.
Key Findings:
AI is a career accelerant for juniors who adapt
Skill surface shifts from 'write code' to 'verify and supervise AI code'
Updated mentorship: coach on integrating AI without over-dependence
If you mentor junior developers: Do you guide them on appropriate AI use?
[0]N/A - I don't mentor juniors
[0]No - they figure it out themselves
[2]Informally - occasional tips
[3]Yes - I discuss when to use and not use AI
[5]Yes - including AI-free exercises to build fundamentals
Note: Junior developer skill atrophy is a major 2025 concern. Mentors need to actively prevent over-reliance.
Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence
Stanford Digital Economy Lab / Stanford HAI
This is the most credible source on AI's employment impact on junior developers. The 13% relative decline for ages 22-25 in AI-exposed roles is significant but more nuanced than the previously cited '25% decrease'. Key insight: the impact is concentrated where AI automates rather than augments - this supports our team_composition dimension's focus on mentorship and skill development. Updated November 2025.
Key Findings:
Early-career workers (ages 22-25) in AI-exposed occupations experienced 13% relative decline in employment
Adjustments occur primarily through employment rather than compensation
Employment declines concentrated in occupations where AI automates rather than augments labor