Hallucination is a fundamental challenge for LLMs: the model confidently produces false information because it generates text from statistical patterns rather than verified facts.

Common types include factual errors (wrong dates or names), fabricated citations (non-existent papers or URLs), logical inconsistencies, and confident nonsense. Causes include gaps in the training data, the model's inability to distinguish fact from fiction, and the pressure to always produce an answer.

Mitigation strategies include RAG (grounding answers in retrieved documents), chain-of-thought prompting, asking models to cite their sources, fine-tuning on factual data, and human-in-the-loop verification. Understanding hallucinations is crucial for safe AI deployment.
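One of the mitigations above, grounding in retrieved documents, can be spot-checked programmatically. The sketch below is a deliberately crude heuristic (not a production hallucination detector, and the function name and threshold are our own invention): it measures what fraction of an answer's content words actually appear in the retrieved sources, so a low score flags an answer that may be ungrounded.

```python
import re

def grounding_score(answer: str, sources: list[str]) -> float:
    """Fraction of the answer's content words (4+ letters) that also
    appear in the retrieved source documents. A crude grounding proxy:
    low scores suggest the answer may contain hallucinated material."""
    answer_words = set(re.findall(r"[a-z]{4,}", answer.lower()))
    source_words: set[str] = set()
    for doc in sources:
        source_words |= set(re.findall(r"[a-z]{4,}", doc.lower()))
    if not answer_words:
        return 1.0  # nothing to check
    return len(answer_words & source_words) / len(answer_words)

# Toy example: one retrieved document, one grounded and one fabricated answer.
sources = ["The Eiffel Tower was completed in 1889 in Paris."]
grounded = "The Eiffel Tower was completed in 1889."
fabricated = "The Eiffel Tower was designed by Leonardo and moved twice."

print(grounding_score(grounded, sources))    # every content word is in the source
print(grounding_score(fabricated, sources))  # most content words are not
```

Real systems use stronger checks (entailment models, citation verification), but the principle is the same: compare the model's claims against the retrieved evidence rather than trusting fluent output.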
Hallucination
When AI models generate plausible-sounding but factually incorrect or fabricated information.
Related Terms
RAG (Retrieval-Augmented Generation)
AI technique combining vector search with LLMs to provide contextual answers from custom knowledge bases.
LLM (Large Language Model)
AI models trained on massive text datasets to understand and generate human-like text.
Prompt Engineering
The practice of designing and optimizing inputs to AI models to achieve desired outputs.