Catching a Hallucination: Trust but Verify
Beginner · 8 min · Copy-paste ready · Sources not yet verified
Scenario
Context: You're using AI to help research an unfamiliar API or library
Goal: Learn to recognize and verify AI claims that might be hallucinated
Anti-pattern: Copying code or information from AI without verification
Tools: ChatGPT, Claude, GitHub Copilot
Key Takeaways
- AI confidently generates plausible-sounding but incorrect information
- Specific API names, parameters, and URLs are high-risk for hallucination
- Ask for documentation links and actually verify them
- Quick REPL tests can catch errors before they enter your codebase (see the sketch after this list)
- When in doubt, ask the AI to verify its own claims
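The REPL-test takeaway is cheap to act on. Here is a minimal sketch, assuming Python, using `json.parse` as a deliberately fake, JavaScript-flavored name an AI might invent:

```python
import inspect
import json

# Does the claimed function exist at all? hasattr() is a one-line check.
print(hasattr(json, "loads"))   # True: real function
print(hasattr(json, "parse"))   # False: plausible-sounding hallucination
                                # (JSON.parse is JavaScript, not Python)

# If it exists, does its signature match what the AI described?
print(inspect.signature(json.loads))
```

Two lines in a throwaway interpreter session are often enough to catch an invented method before it lands in a commit.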
Try It Yourself
Prompt Template
I want to use [LIBRARY/API] to [GOAL]. Can you show me how?
Also, please include:
1. A link to the relevant official documentation
2. The minimum version required for this feature
3. Any platform-specific limitations
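Claim 2 from the template (the minimum version) is also easy to check locally rather than taking the AI's word for it. A sketch assuming Python; `requests` and `2.25` are placeholders for your library and the claimed minimum:

```python
from importlib.metadata import PackageNotFoundError, version

package = "requests"       # placeholder: the library under discussion
claimed_minimum = "2.25"   # placeholder: whatever minimum the AI claimed

try:
    installed = version(package)
    print(f"{package} {installed} installed; AI claims {claimed_minimum}+ is needed")
except PackageNotFoundError:
    print(f"{package} is not installed; the AI may have invented the package name")
```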
Variations to Try
- Before using: 'Let me verify - can you confirm this method exists in the official docs?'
- After getting code: 'What would happen if I called this with [edge case]? Let me check the docs.' (see the probe after this list)
- If something seems off: 'I couldn't find [method] in the docs. Are you sure that's the correct name?'
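For the edge-case variation, don't just ask the AI; run the suspicious input in a throwaway session first. A sketch assuming Python, probing whether `json.loads` really accepts bytes (it does, since Python 3.6):

```python
import json

# Probe the edge case in isolation before it enters your codebase.
try:
    print(json.loads(b'{"ok": true}'))   # bytes input: does the claim hold?
except TypeError as exc:
    print(f"Claim didn't hold: {exc}")
```

If the probe raises, you've caught the error in seconds; if it passes, you now have evidence rather than an AI's assurance.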