PARENTing via Model-Agnostic Reinforcement Learning to Correct Pathological Behaviors in Data-to-Text Generation

INLG 2020
Abstract: In language generation models conditioned on structured data, classical training via maximum likelihood almost always leads models to pick up on dataset divergences (i.e., hallucinations or omissions) and to reproduce them erroneously in their own generations at inference. In this work, we build on previous Reinforcement Learning based approaches and show that a model-agnostic framework relying on the recently introduced PARENT metric is effective at reducing both hallucinations and omissions. Evaluations on the widely used WikiBIO and WebNLG benchmarks demonstrate the effectiveness of this framework compared to state-of-the-art models.
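The abstract does not include code; as an illustration only, here is a minimal sketch of how a PARENT-style score could serve as a reward in self-critical policy-gradient training (Rennie et al., 2017), one common way to optimize a non-differentiable metric. Both helper names are assumptions: `parent_proxy` is a crude overlap-based stand-in, not the actual PARENT metric (which computes entailed precision and recall against the source table), and `self_critical_loss` assumes per-token log-probabilities produced by the generator.

```python
import torch

def parent_proxy(hypothesis, reference, table_tokens):
    # Hypothetical stand-in for PARENT (Dhingra et al., 2019): a crude
    # token-overlap proxy that rewards output supported by the reference
    # or the source table. The real metric uses entailed precision and
    # entailed recall computed over table records.
    hyp = set(hypothesis)
    support = set(reference) | set(table_tokens)
    if not hyp:
        return 0.0
    precision = len(hyp & support) / len(hyp)          # penalizes hallucination
    recall = len(set(reference) & hyp) / max(len(set(reference)), 1)  # penalizes omission
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def self_critical_loss(log_probs, sampled, greedy, reference, table_tokens):
    # Self-critical baseline: the reward of a sampled sequence is offset
    # by the reward of the greedy decode, so gradients push probability
    # mass toward samples that beat the model's own greedy output.
    advantage = (parent_proxy(sampled, reference, table_tokens)
                 - parent_proxy(greedy, reference, table_tokens))
    # log_probs: per-token log-probabilities of the sampled sequence.
    return -(advantage * log_probs.sum())

# Example usage with dummy values (log_probs would come from the generator):
log_probs = torch.tensor([-0.9, -1.2, -1.6], requires_grad=True)
loss = self_critical_loss(
    log_probs,
    sampled=["john", "was", "born"],
    greedy=["john", "born"],
    reference=["john", "was", "born", "in", "1980"],
    table_tokens=["john", "1980"],
)
loss.backward()  # in a real setup this propagates into the generator
```

Because the reward combines a precision-like term (support in the table or reference) and a recall-like term, this kind of signal can penalize both hallucinated content and omissions, which is the behavior the framework targets.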