Hallucination (LLMs)
Occurs when a model produces confident but incorrect or fabricated information. Mitigate with retrieval grounding, output validation, provenance tracking, and human review for high-stakes tasks.
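A minimal sketch of the retrieval-grounding idea (RAG): retrieve relevant documents and instruct the model to answer only from that context, admitting when the answer is not present. The keyword retriever, sample knowledge base, and the model-call placeholder below are illustrative assumptions, not any specific library's API.

```python
# Sketch: ground an LLM prompt in retrieved context to reduce hallucination.
# The knowledge base and retriever are toy examples; pass the resulting prompt
# to whatever model client you actually use (hypothetical call_llm(prompt)).

from typing import List

KNOWLEDGE_BASE = [
    "The Eiffel Tower is 330 metres tall.",
    "Paris is the capital of France.",
    "The Louvre is the world's most-visited museum.",
]

def retrieve(question: str, docs: List[str], k: int = 2) -> List[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return ranked[:k]

def grounded_prompt(question: str, context: List[str]) -> str:
    """Build a prompt that restricts the model to the retrieved context."""
    ctx = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say 'I don't know.'\n"
        f"Context:\n{ctx}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    question = "How tall is the Eiffel Tower?"
    context = retrieve(question, KNOWLEDGE_BASE)
    print(grounded_prompt(question, context))
```

Validation and provenance follow the same pattern: check that claims in the model's answer can be traced back to the retrieved sources, and route anything unverifiable to human review.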
Related terms
RAG
Guardrails
Evaluation