Eval all, trust a few, do wrong to none: Comparing sentence generation models

CoRR 2018 (modified: 21 Sept 2022)
Abstract: In this paper, we study recent neural generative models for text generation that are based on variational autoencoders. Previous work has employed various techniques to control the prior distribution of the latent codes in these models, which is important for sampling performance, but little attention has been paid to reconstruction error. In our study, we follow a rigorous evaluation protocol using a large set of previously used and novel automatic and human evaluation metrics, applied to both generated samples and reconstructions. We hope that this protocol will become the new evaluation standard when comparing neural generative models for text.
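For context (a standard background formula, not taken from the paper itself), the tension the abstract describes is explicit in the usual variational autoencoder objective: the evidence lower bound splits into a reconstruction term and a KL term that pulls the approximate posterior toward the prior p(z), so techniques that tighten control over the prior distribution of the latent codes typically trade against reconstruction error:

\[
\mathcal{L}(\theta, \phi; x)
  = \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]}_{\text{reconstruction}}
  \;-\;
  \underbrace{D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\Vert\, p(z)\right)}_{\text{prior matching}}
\]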