🧠 AI & LLMs beginner

Hallucination

When AI models generate plausible-sounding but factually incorrect or fabricated information.

Hallucinations are a fundamental challenge for LLMs: the model confidently produces false information because it generates text from statistical patterns rather than verified facts.

Common types: factual errors (wrong dates, names), fabricated citations (non-existent papers or URLs), logical inconsistencies, and confident nonsense.

Causes include gaps in training data, the model's inability to distinguish fact from fiction, and the pressure to always produce an answer.

Mitigation strategies include RAG (grounding answers in retrieved documents), chain-of-thought prompting, asking the model to cite sources, fine-tuning on factual data, and human-in-the-loop verification.

Understanding hallucinations is crucial for safe AI deployment.
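The RAG mitigation mentioned above can be sketched in a few lines: retrieve the snippets most relevant to a question, then build a prompt that tells the model to answer only from those snippets and to cite them. This is a minimal toy sketch, not a production retriever; the corpus, the word-overlap scoring, and the prompt wording are all illustrative assumptions.

```python
# Toy RAG-style grounding: retrieve relevant snippets, then build a
# prompt instructing the model to answer only from those snippets.
# Corpus, scoring, and prompt wording are illustrative assumptions.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank corpus snippets by word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, corpus: list[str]) -> str:
    """Assemble a prompt that grounds the answer in retrieved snippets."""
    snippets = retrieve(query, corpus)
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer using ONLY the sources below. "
        "Cite sources as [n]; if the answer is not in them, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

corpus = [
    "The Transformer architecture was introduced in 2017.",
    "Paris is the capital of France.",
    "RAG combines retrieval with generation to ground model outputs.",
]
prompt = build_grounded_prompt("What does RAG combine?", corpus)
print(prompt)
```

The key idea is the instruction to answer only from the supplied sources and to admit when the answer is absent; real systems replace the word-overlap scorer with embedding-based similarity search over a document index.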