
Hallucination

Also known as: Fabrication, False Generation

Definition

Hallucination occurs when an LLM generates information that is factually incorrect, made up, or not grounded in reality, yet presents it with confidence. The model isn't lying; it's generating plausible-sounding text from learned patterns, without access to ground truth.

Hallucinations are a fundamental limitation of LLMs, and the reason verification of AI output is crucial.

Key Concepts

  • Confident errors: Model states falsehoods with certainty
  • Plausibility: Hallucinations often sound completely reasonable
  • Pattern matching: Model generates based on patterns, not verified facts
  • Verification needed: Always check important facts independently

Examples

Common Hallucination
Fabricated Academic Citation
User: "Give me a citation for research on sleep" LLM Response: "According to Smith, J. et al. (2019), published in the Journal of Sleep Research, 'Sleep deprivation significantly affects cognitive function in adults aged 25-45...'" ⚠️ PROBLEM: This paper, author, and quote may be completely fabricated!
LLMs frequently invent plausible-sounding but fake academic citations, statistics, and quotes.
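
Citations are one of the few hallucination types you can check mechanically. As a minimal sketch, the helper below queries the public Crossref API for a title match; the `citation_exists` name and its crude containment heuristic are illustrative choices, not a production fact-checker.

```python
import requests

def citation_exists(title: str) -> bool:
    """Return True if Crossref's top search hit closely matches `title`.

    A False result suggests the citation may be fabricated, though a
    real pipeline would use fuzzy matching rather than this crude check.
    """
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return False
    top_title = (items[0].get("title") or [""])[0].lower()
    # Crude containment check in either direction.
    return title.lower() in top_title or top_title in title.lower()

# Usage: flag a citation whose title cannot be matched.
if not citation_exists("Sleep deprivation and cognitive function in adults aged 25-45"):
    print("No close Crossref match; treat this citation as suspect.")
```
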
Types of Hallucinations
What Models Get Wrong
FACTUAL HALLUCINATIONS
  • Wrong dates, numbers, statistics
  • Non-existent people or places
  • Fabricated events

CITATION HALLUCINATIONS
  • Fake papers and authors
  • Wrong journal names
  • Made-up quotes

LOGICAL HALLUCINATIONS
  • Confident but flawed reasoning
  • Contradicting earlier statements
  • Impossible conclusions
Hallucinations can occur in many forms across different types of content.
Mitigation Strategies
How to Reduce Hallucinations
1. Use RAG (Retrieval-Augmented Generation) → Ground responses in real documents
2. Lower temperature → More deterministic, less creative
3. Ask for sources → "Cite your sources" (then verify them!)
4. Use chain-of-thought → Explicit reasoning is easier to verify
5. Implement fact-checking → Separate verification step
Multiple strategies can reduce, but not eliminate, hallucinations; the sketch below illustrates the first two.
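
Here is a minimal sketch of strategies 1 and 2, assuming the official OpenAI Python SDK's chat-completions interface. The model name, document formatting, and refusal instruction are illustrative choices, and the retrieval step that produces `documents` is out of scope.

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()

def grounded_answer(question: str, documents: list[str]) -> str:
    """Answer using only retrieved documents (a minimal RAG pattern)."""
    # Strategy 1: put real source text in the prompt and demand citations.
    context = "\n\n".join(f"[Doc {i + 1}] {doc}" for i, doc in enumerate(documents))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        temperature=0,        # strategy 2: deterministic, less "creative"
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer using ONLY the documents below. Cite the document "
                    "number for every claim. If the answer is not in the "
                    "documents, reply 'I don't know' instead of guessing.\n\n"
                    + context
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```

The explicit "I don't know" escape hatch matters as much as the retrieval itself: without it, models tend to fill gaps with plausible fabrications, and even with it, grounding reduces rather than eliminates hallucinations.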

Interactive Exercise

🔍 Spot the Hallucination

Review this LLM response about a company and identify what might be hallucinated:

"TechCorp was founded in 2015 by Sarah Chen and raised $50M in Series A funding led by Sequoia Capital. Their flagship product, DataSync Pro, has over 10,000 enterprise customers including Microsoft and Google."

What would you verify before trusting this information?

Pro Tips
  • Always verify facts, especially citations and statistics (see the verification sketch after these tips)
  • Ask for sources and check if they actually exist
  • Use RAG to ground responses in real documents
  • Lower temperature can reduce (but not eliminate) hallucinations
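
As a hedged sketch of the "separate fact-checking step" from the mitigation list, again assuming the OpenAI Python SDK: a second, independent model call judges each claim against source text. The SUPPORTED/UNSUPPORTED protocol is an illustrative convention, not a standard API.

```python
from openai import OpenAI

client = OpenAI()

def verify_claim(claim: str, source_text: str) -> bool:
    """Ask a second model call whether `claim` is supported by `source_text`.

    Generation and fact-checking are decoupled, so a hallucinated claim
    can be caught before it reaches the user.
    """
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a strict fact checker. Reply with exactly "
                    "SUPPORTED or UNSUPPORTED."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Source:\n{source_text}\n\nClaim:\n{claim}\n\n"
                    "Is the claim fully supported by the source?"
                ),
            },
        ],
    )
    return resp.choices[0].message.content.strip().upper().startswith("SUPPORTED")
```

Because the checker is itself an LLM, it can also err; for high-stakes claims, pair it with human review or database lookups like the Crossref check above.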
