
What is Hallucination?

When an AI model confidently generates factually incorrect or entirely fabricated information.

Full Definition

Hallucination is the phenomenon where a language model produces output that sounds plausible and confident but is factually wrong, internally inconsistent, or entirely made up. Examples include invented citations, fabricated statistics, non-existent people, or incorrect dates. Hallucinations arise because LLMs are trained to predict likely-sounding continuations rather than to retrieve verified facts. Mitigation strategies include Retrieval-Augmented Generation (RAG), which grounds the model in cited source documents; chain-of-thought prompting, which forces explicit reasoning steps; and grounding the model's outputs with tool calls to verified databases. Understanding hallucination is essential for any production AI deployment, particularly in domains like medicine, law, or finance.
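As a concrete illustration of the RAG strategy mentioned above, the sketch below retrieves a few in-memory source snippets with a naive keyword-overlap score and inserts them into the prompt, instructing the model to answer only from the cited sources. The document store, scoring function, prompt wording, and helper names are illustrative placeholders under assumed conventions, not any particular library's API.

# Minimal RAG-style grounding sketch (hypothetical helpers, not a specific library).
from typing import List, Tuple

DOCUMENTS = [
    ("doc-1", "Aspirin was first synthesized by Felix Hoffmann at Bayer in 1897."),
    ("doc-2", "The FDA approved the first statin, lovastatin, in 1987."),
]

def retrieve(query: str, docs: List[Tuple[str, str]], k: int = 2) -> List[Tuple[str, str]]:
    """Rank documents by naive keyword overlap with the query and keep the top k."""
    query_terms = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(query_terms & set(d[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str) -> str:
    """Insert retrieved, citable sources into the prompt so the model answers
    from them instead of free-associating."""
    sources = retrieve(query, DOCUMENTS)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in sources)
    return (
        "Answer using ONLY the sources below and cite them by id. "
        "If the sources do not contain the answer, say you don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # The grounded prompt is what would be sent to the model in place of a bare question.
    print(build_grounded_prompt("When was aspirin first synthesized?"))

In practice the keyword-overlap retriever would be replaced by an embedding-based vector search, but the grounding principle is the same: the model is constrained to material it can cite rather than to whatever continuation sounds most likely.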
