On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models?

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission · Readers: Everyone
Abstract: Although knowledge-grounded conversational models can generate fluent responses that are indistinguishable from human-written ones, they are known to produce factually invalid statements, a phenomenon commonly called hallucination. In this work, we investigate the underlying causes of this phenomenon: is hallucination due to the training data, or to the models? We conduct a comprehensive human study on both existing knowledge-grounded dialogue datasets and several state-of-the-art models. Our study reveals that the standard benchmarks consist of more than 60% hallucinated responses, leading to models that not only hallucinate but even amplify hallucinations. Moreover, we qualitatively analyze the nature of hallucinations and identify key response strategies used by humans and models that lead to hallucinations. We hope these insights will pave the way towards building hallucination-free conversational models.