Does the Generator Mind its Contexts? An Analysis of Generative Model Faithfulness under Knowledge Transfer

Anonymous

16 Oct 2022 (modified: 05 May 2023) · ACL ARR 2022 October Blind Submission
Keywords: Question Answering, Generative Model, Faithfulness
Abstract: A knowledge-augmented generator should produce information grounded in the input contextual knowledge, regardless of how that context changes. Most prior work analyzes hallucination under static inputs (e.g., in summarization or machine translation). In this work, we probe faithfulness in generative question answering under dynamic knowledge: we examine whether models hallucinate answers from parametric memory when the contextual knowledge changes, and analyze why this happens. For efficiency, we propose a simple and effective measure of such hallucinations. Surprisingly, our investigation reveals that all models hallucinate previous answers only in rare cases. To further analyze the cause of this behavior, we conduct experiments from several perspectives and verify that context is a critical factor in hallucination during both training and testing.
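The abstract does not spell out the proposed hallucination measure, but the setup it describes — checking whether a model, given a changed context, still emits the answer supported by the original context — can be sketched as a simple rate. The function name, string normalization, and exact-match criterion below are all assumptions for illustration, not the paper's actual metric.

```python
def memory_hallucination_rate(predictions, old_answers, new_answers):
    """Estimate how often a model ignores the updated context.

    A prediction counts as a memory hallucination when it matches the
    answer from the original context (old_answers) while differing from
    the answer supported by the changed context (new_answers).
    Exact match after lowercasing/whitespace normalization is a stand-in
    for whatever matching criterion the paper actually uses.
    """
    def norm(s):
        return " ".join(s.lower().split())

    hallucinated = sum(
        1
        for pred, old, new in zip(predictions, old_answers, new_answers)
        if norm(pred) == norm(old) and norm(pred) != norm(new)
    )
    return hallucinated / len(predictions)


# Hypothetical example: one of three predictions repeats the stale answer.
rate = memory_hallucination_rate(
    predictions=["Paris", "Berlin", "Rome"],
    old_answers=["Paris", "Munich", "Rome"],
    new_answers=["Lyon", "Berlin", "Rome"],
)
print(rate)  # → 0.3333...
```

The third case shows why both conditions matter: when the old and new contexts support the same answer, repeating it is not evidence of hallucination.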
Paper Type: short
Research Area: Question Answering