Counterfactual Decoding for Anti-Hallucination Knowledge-grounded Dialogue Generation

Anonymous

05 Jun 2022 (modified: 05 May 2023) · ACL ARR 2022 June Blind Submission · Readers: Everyone
Keywords: Knowledge-grounded Dialogue, Hallucination in NLG, Causal Inference
Abstract: Knowledge-grounded Dialogue (KGD) generation, which draws on external knowledge resources to produce natural and informative responses, has attracted considerable attention in recent years. Empowered by large-scale pretrained language models, existing methods achieve impressive performance on this task. However, hallucination remains a serious problem, introducing unpredictable factual errors into generated responses. Although prior efforts try to alleviate this phenomenon through data pre-processing or fact-checking, these methods still rely heavily on external tools or resources. Inspired by counterfactual reasoning, we propose a lightweight and self-contained anti-hallucination mechanism for KGD based on a causal effect analysis. Benchmark and human evaluation results for our example implementation show that our method significantly reduces hallucination without degrading model performance. We hope our efforts draw more attention to the use of causal inference for addressing such issues.
Paper Type: long
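The paper body is not included on this page, but the general idea the abstract describes, contrasting a model's prediction against a counterfactual (knowledge-removed) prediction to cancel out bias toward the language prior, can be illustrated with a toy sketch. All function names, the logit-subtraction formulation, and the numbers below are assumptions for illustration, not the authors' actual method:

```python
import numpy as np

def counterfactual_decode_step(logits_factual, logits_counterfactual, alpha=1.0):
    """Pick the next token by contrasting two decoding passes.

    logits_factual:        logits when the model sees the grounding knowledge.
    logits_counterfactual: logits from a counterfactual pass with the
                           knowledge masked out (model relies on its prior).
    alpha:                 strength of the causal-effect correction (assumed).

    Subtracting the counterfactual logits downweights tokens the model
    would produce regardless of the knowledge, i.e. prior-driven
    (potentially hallucinated) continuations.
    """
    adjusted = np.asarray(logits_factual) - alpha * np.asarray(logits_counterfactual)
    return int(np.argmax(adjusted))

# Toy vocabulary of 4 tokens; token 1 is favored by the language prior,
# token 2 is supported by the grounding knowledge.
factual = np.array([0.1, 3.0, 2.5, 0.0])         # with knowledge
counterfactual = np.array([0.1, 2.8, 0.2, 0.0])  # knowledge masked

print(int(np.argmax(factual)))                            # greedy: prior-driven token 1
print(counterfactual_decode_step(factual, counterfactual))  # corrected: token 2
```

Plain greedy decoding picks the prior-driven token, while the counterfactual correction surfaces the knowledge-supported one; in a real system both logit vectors would come from two forward passes of the same language model over the full vocabulary.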