Grounding is the practice of anchoring LLM responses in verifiable source material rather than letting the model generate from its training data alone, so that outputs rest on specific, citable information instead of potentially hallucinated content.
Retrieval-augmented generation (RAG) is the most common grounding technique, but grounding also encompasses citing sources, querying structured data, and verifying claims against references; the sketch below illustrates the core pattern.
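To make the pattern concrete, here is a minimal, self-contained sketch of grounding via retrieval: fetch relevant passages, then build a prompt that instructs the model to answer only from those passages and to cite them. The corpus, the keyword-overlap scoring, and the prompt wording are illustrative assumptions, not any particular library's API; a real system would use a vector store and an actual model call.

```python
# Minimal grounding sketch: retrieve supporting passages, then pin the
# model's answer to them with explicit citation instructions.
# The corpus and scoring below are toy stand-ins for a real retriever.

CORPUS = {
    "doc1": "The Eiffel Tower was completed in 1889 for the World's Fair.",
    "doc2": "The Eiffel Tower is 330 metres tall including its antennas.",
    "doc3": "The Louvre is the world's most-visited art museum.",
}


def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_grounded_prompt(query: str) -> str:
    """Assemble a prompt that restricts the answer to retrieved sources."""
    sources = retrieve(query)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in sources)
    return (
        "Answer using ONLY the sources below. Cite the source id in "
        "brackets after each claim. If the sources are insufficient, "
        "say so instead of guessing.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )


if __name__ == "__main__":
    # In practice this prompt would be sent to an LLM; printing it here
    # shows the grounding structure without depending on a model API.
    print(build_grounded_prompt("How tall is the Eiffel Tower?"))
```

The key design choice is in the prompt itself: the model is told to cite source ids and to admit when the sources do not cover the question, which is what makes the output verifiable rather than merely plausible.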