Hallucination (AI)

A phenomenon in which a generative AI model, particularly a large language model (LLM), produces output that sounds plausible and confident but is factually incorrect, nonsensical, or unsupported by the provided input. For example, a model may invent citations to papers that do not exist. Because hallucinated output can be superficially indistinguishable from correct output, critical evaluation of AI-generated content is necessary.
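
To illustrate why output should be checked against its sources, here is a minimal, hypothetical Python sketch (not any real library's API): it flags sentences whose words barely overlap with the provided context. All names and example strings here are invented for illustration, and word overlap is only a toy heuristic; genuine hallucination detection remains an open research problem.

```python
# Naive groundedness check: flag output sentences whose words have
# little overlap with the provided source context. Purely illustrative.

def support_score(sentence: str, context: str) -> float:
    """Return the fraction of the sentence's words found in the context."""
    ctx_words = set(context.lower().split())
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    if not words:
        return 1.0
    return sum(w in ctx_words for w in words) / len(words)

context = "The Eiffel Tower was completed in 1889 and stands in Paris."
output_sentences = [
    "The Eiffel Tower was completed in 1889.",       # grounded in context
    "It was designed by Leonardo da Vinci in 1623.", # hallucinated
]

for s in output_sentences:
    label = "possible hallucination" if support_score(s, context) < 0.5 else "supported"
    print(f"{label}: {s}")
```

Running this prints "supported" for the first sentence and "possible hallucination" for the second, showing in miniature what grounding-based verification tries to do at scale.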