Hallucination occurs when an LLM generates information that is factually incorrect, fabricated, or not grounded in reality, yet presents it with confidence. The model isn't lying; it's generating plausible-sounding text based on patterns, without access to ground truth.
Hallucinations are a fundamental limitation of LLMs, which is why verifying AI output is crucial.
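To make the verification point concrete, here is a minimal, hypothetical sketch of one simple grounding check: flag answer sentences whose content words barely overlap with a trusted source document. The function name, threshold, and example texts are illustrative assumptions, not a standard API; real systems typically use retrieval, entailment models, or human review rather than lexical overlap alone.

```python
import re

def grounded_sentences(answer: str, source: str, min_overlap: float = 0.6) -> list[tuple[str, bool]]:
    """Crude lexical check: a sentence counts as 'grounded' if at least
    min_overlap of its longer content words also appear in the source text.
    Illustrative only; lexical overlap cannot prove factual correctness."""
    source_words = set(re.findall(r"[a-z0-9']+", source.lower()))
    results = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        # Keep words longer than 3 characters as a rough proxy for content words.
        words = [w for w in re.findall(r"[a-z0-9']+", sentence.lower()) if len(w) > 3]
        if not words:
            continue
        overlap = sum(w in source_words for w in words) / len(words)
        results.append((sentence, overlap >= min_overlap))
    return results

if __name__ == "__main__":
    # Hypothetical example: the second answer sentence is a confident fabrication.
    source = "The Eiffel Tower was completed in 1889 and stands 330 metres tall."
    answer = ("The Eiffel Tower was completed in 1889. "
              "It was designed by Leonardo da Vinci as a royal observatory.")
    for sentence, ok in grounded_sentences(answer, source):
        print("GROUNDED" if ok else "SUSPECT ", sentence)
```

Running this flags the fabricated second sentence as suspect while passing the first, which mirrors the broader practice of checking model output against ground truth before trusting it.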