Improved Beam Search for Hallucination Mitigation in Abstractive Summarization

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission · Readers: Everyone
TL;DR: A new NLI-based beam re-scorer that mitigates hallucination in abstractive summarization at inference time.
Abstract: Advances in large pretrained language models have significantly improved their performance on conditional language generation tasks, including summarization, albeit with a tendency to hallucinate. With the rise in commercial use of text-generation applications, it has become necessary to include a component that ensures the factuality of responses. To reduce hallucinations, conventional methods have proposed improving beam search or applying a fact checker as a post-processing step. In this paper, we investigate using the Natural Language Inference (NLI) entailment metric to detect and prevent hallucinations during summary generation. We propose an inference-time, easily generalizable NLI-assisted beam re-ranking mechanism that computes entailment probability scores between the input context and the beams generated by the summarization model during saliency-enhanced greedy decoding. We also investigate the limitations of existing academic factuality benchmarks and demonstrate that our proposed algorithm consistently outperforms the baselines in human evaluation on the publicly available XSum and CNN/DM datasets.
Paper Type: long
Research Area: Summarization
Contribution Types: Model analysis & interpretability
Languages Studied: English
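
To make the re-ranking idea concrete, below is a minimal sketch of NLI-assisted beam re-ranking. It assumes an off-the-shelf MNLI classifier (roberta-large-mnli from Hugging Face), a list of finished candidate summaries paired with their beam log-probabilities, and a hypothetical interpolation weight alpha; none of these choices are specified by the paper. The paper's actual mechanism scores beams during saliency-enhanced greedy decoding, so this sketch illustrates only the entailment-scoring step, not the authors' implementation.

```python
# Minimal sketch of NLI-assisted beam re-ranking. The NLI model choice
# (roberta-large-mnli) and the interpolation weight alpha are illustrative
# assumptions, not the authors' exact configuration.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

nli_tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli_model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
nli_model.eval()


def entailment_score(premise: str, hypothesis: str) -> float:
    """Probability that the source document (premise) entails the summary (hypothesis)."""
    inputs = nli_tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = nli_model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]
    # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
    return probs[2].item()


def rerank_beams(source: str, beams: list[tuple[str, float]], alpha: float = 0.5) -> str:
    """Re-rank (summary, beam_log_prob) pairs by interpolating the beam score
    with the NLI entailment probability; alpha is a hypothetical weight."""
    rescored = [
        (summary, (1 - alpha) * log_prob + alpha * entailment_score(source, summary))
        for summary, log_prob in beams
    ]
    return max(rescored, key=lambda pair: pair[1])[0]
```

In a setup closer to the one the abstract describes, the entailment score would be computed on partial beams at each decoding step rather than only on completed candidates, and the interpolation weight would be tuned on a validation set.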