Fact
AI Hallucinations
February 21, 2025
LLMs can sometimes 'hallucinate': they generate plausible-sounding but incorrect or fabricated information, stated with the same confidence as accurate output. Always verify critical claims from LLM outputs against authoritative sources.
Category: AI Limitations
Difficulty: Intermediate