Text Generation by Learning from Off-Policy Demonstrations

28 Sep 2020 (modified: 14 Jan 2021) · ICLR 2021 Poster
  • Keywords: text generation, learning from demonstrations
  • Abstract: Current approaches to text generation largely rely on autoregressive models and maximum likelihood estimation. This paradigm leads to (i) diverse but low-quality samples due to mismatched learning objective and evaluation metric (likelihood vs. quality) and (ii) exposure bias due to mismatched history distributions (gold vs. model-generated). To alleviate these problems, we frame text generation as a reinforcement learning (RL) problem with expert demonstrations (i.e., the training data), where the goal is to maximize quality given model-generated histories. Prior RL approaches to generation often face optimization issues due to the large action space and sparse reward. We propose GOLD (generation by off-policy learning from demonstrations): an easy-to-optimize algorithm that learns from the off-policy demonstrations by importance weighting. According to both automatic and human evaluation, models trained by GOLD outperform those trained by MLE and policy gradient on summarization, question generation, and machine translation. Further, they are less sensitive to decoding algorithms and alleviate exposure bias.
  • Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
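To make the importance-weighting idea in the abstract concrete, here is a minimal sketch of a GOLD-style per-token loss. This is an illustrative approximation, not the paper's exact algorithm: the function name `gold_token_loss` and the specific choice of weighting each demonstration token's negative log-likelihood by the model's own (treated-as-constant) probability of that token are assumptions made for this sketch. The key contrast with MLE is that likely-under-the-model tokens are up-weighted, so the objective concentrates on quality rather than covering all of the data distribution.

```python
import math

def gold_token_loss(log_probs):
    """Sketch of an importance-weighted NLL in the spirit of GOLD (hypothetical helper).

    log_probs: the model's log-probability of each token in a demonstration
    sequence, given model-generated or gold histories.

    Each token's negative log-likelihood is reweighted by the model's own
    probability of that token (used as a detached, constant importance
    weight), so tokens the model already assigns high probability dominate
    the objective; plain MLE corresponds to a uniform weight of 1.
    """
    loss = 0.0
    for lp in log_probs:
        w = math.exp(lp)      # detached importance weight ~ pi_theta(a | s)
        loss += -w * lp       # weighted per-token negative log-likelihood
    return loss / len(log_probs)

# Compared with the plain MLE loss (mean of -lp), confidently predicted
# tokens contribute relatively more, and low-probability tokens are
# largely ignored rather than forced upward.
mle = lambda lps: sum(-lp for lp in lps) / len(lps)
```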