Hallucination (LLMs)

Occurs when a model produces confident but incorrect or fabricated output, such as invented facts, citations, or API details. Mitigations include retrieval grounding (constraining answers to retrieved sources), output validation, provenance tracking, and human review for high‑stakes tasks. A minimal sketch of the first two mitigations follows.
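
The sketch below illustrates retrieval grounding plus a simple validation pass: retrieve passages, instruct the model to answer only from them, then flag answers with little word overlap against the sources for human review. The retriever, the prompt wording, the `is_supported` overlap check, and the threshold are all illustrative assumptions, not any particular library's API.

```python
# Minimal sketch of retrieval grounding with a crude validation check.
# All helpers and thresholds here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Passage:
    source: str  # provenance: where the text came from
    text: str


def retrieve(query: str, corpus: list[Passage], k: int = 3) -> list[Passage]:
    """Toy keyword retriever; a real system would use a vector index."""
    terms = set(query.lower().split())
    scored = sorted(corpus, key=lambda p: -len(terms & set(p.text.lower().split())))
    return scored[:k]


def build_grounded_prompt(query: str, passages: list[Passage]) -> str:
    """Instruct the model to answer only from the supplied context."""
    context = "\n".join(f"[{p.source}] {p.text}" for p in passages)
    return (
        "Answer using only the context below. If the context does not "
        "contain the answer, reply 'I don't know.'\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )


def is_supported(answer: str, passages: list[Passage], threshold: float = 0.5) -> bool:
    """Crude validation: flag answers whose words barely overlap the sources."""
    answer_terms = set(answer.lower().split())
    if not answer_terms:
        return False
    context_terms = set(" ".join(p.text.lower() for p in passages).split())
    overlap = len(answer_terms & context_terms) / len(answer_terms)
    return overlap >= threshold


if __name__ == "__main__":
    corpus = [
        Passage("handbook.md", "The support hotline is open Monday to Friday, 9am to 5pm."),
        Passage("faq.md", "Refunds are processed within 14 days of the request."),
    ]
    query = "When is the support hotline open?"
    passages = retrieve(query, corpus)
    prompt = build_grounded_prompt(query, passages)  # send this to the LLM
    answer = "Monday to Friday, 9am to 5pm."         # placeholder for the model's reply
    if is_supported(answer, passages):
        print(answer, "(sources:", ", ".join(p.source for p in passages) + ")")
    else:
        print("Escalate to human review: answer not grounded in the sources.")
```

In practice the overlap check would be replaced by a stronger verifier (entailment model, citation check, or schema validation), but the structure stays the same: ground, generate, validate, and route low-confidence answers to a person.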