PNAT: Non-autoregressive Transformer by Position Learning

Anonymous

Sep 25, 2019 · ICLR 2020 Conference Blind Submission
  • Abstract: Non-autoregressive generation is a new paradigm for text generation. Previous work has rarely considered explicitly modeling the positions of generated words, yet position modeling of output words is an essential problem in non-autoregressive text generation. In this paper, we propose PNAT, which explicitly models the positions of output words as latent variables in text generation. The proposed PNAT is simple yet effective. Experimental results show that PNAT achieves very promising results on machine translation and paraphrase generation tasks, outperforming many strong baselines.
  • Keywords: Text Generation
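
The abstract's core idea, positions of output words as latent variables decoded in parallel, can be illustrated with a minimal sketch. This is not the authors' code; the module names (`PositionPredictor`, `reorder`), the hard `argsort` approximation of the latent positions, and all sizes are illustrative assumptions.

```python
# A minimal sketch of the PNAT idea: predict a position for each decoder
# input slot (a latent permutation), reorder the slots accordingly, then
# emit all output tokens in one parallel, non-autoregressive pass.
import torch
import torch.nn as nn

class PositionPredictor(nn.Module):
    """Hypothetical head that scores a position for each decoder input."""
    def __init__(self, d_model):
        super().__init__()
        self.score = nn.Linear(d_model, 1)

    def forward(self, decoder_inputs):                    # (batch, tgt_len, d_model)
        scores = self.score(decoder_inputs).squeeze(-1)   # (batch, tgt_len)
        # The paper treats positions as latent variables; a hard argsort
        # over the scores is a crude stand-in for that inference step.
        return scores.argsort(dim=-1)                     # permutation per example

def reorder(decoder_inputs, positions):
    """Permute decoder inputs to their predicted target positions."""
    idx = positions.unsqueeze(-1).expand(-1, -1, decoder_inputs.size(-1))
    return torch.gather(decoder_inputs, 1, idx)

# Usage: positions are predicted once, then every token is generated at once.
d_model, vocab = 64, 1000
inputs = torch.randn(2, 7, d_model)                  # initialized decoder inputs
positions = PositionPredictor(d_model)(inputs)
logits = nn.Linear(d_model, vocab)(reorder(inputs, positions))
tokens = logits.argmax(-1)                           # all 7 tokens in parallel
```

Unlike an autoregressive decoder, nothing here conditions on previously emitted tokens, which is why getting the latent positions right matters so much in this setting.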