MAUVE: Measuring the Gap Between Neural Text and Human Text using Divergence Frontiers

21 May 2021 (edited 02 Nov 2021) · NeurIPS 2021 Oral
  • Keywords: Open-ended generation, neural text generation, evaluation, divergence frontier, human judgement
  • TL;DR: A comparison measure for open-ended text generation that directly compares the distribution of machine-generated text to that of human-written text.
  • Abstract: As major progress is made in open-ended text generation, measuring how close machine-generated text is to human language remains a critical open problem. We introduce Mauve, a comparison measure for open-ended text generation, which directly compares the learnt distribution from a text generation model to the distribution of human-written text using divergence frontiers. Mauve scales up to modern text generation models by computing information divergences in a quantized embedding space. Through an extensive empirical study on three open-ended generation tasks, we find that Mauve identifies known properties of generated text, scales naturally with model size, and correlates with human judgments, with fewer restrictions than existing distributional evaluation metrics.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: https://github.com/krishnap25/mauve-experiments
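The recipe the abstract outlines — embed both text collections, quantize the embeddings into a shared histogram, trace a divergence frontier over mixtures, and summarize it by an area — can be sketched as follows. This is a minimal illustration, not the authors' implementation (see the linked repo for that): it assumes the upstream embedding and k-means quantization have already produced per-cluster counts, and the scaling constant `scale`, the mixture grid, the smoothing `eps`, and the frontier endpoint handling are all illustrative choices.

```python
import numpy as np

def mauve_sketch(p_counts, q_counts, num_mixtures=25, scale=5.0):
    """MAUVE-style score from two histograms over quantized embeddings.

    p_counts / q_counts: per-cluster counts for human-written and
    machine-generated text, produced by an assumed upstream step that
    embeds each sample and assigns it to a k-means cluster.
    """
    eps = 1e-12  # smoothing so empty clusters keep divergences finite
    p = np.asarray(p_counts, dtype=float)
    q = np.asarray(q_counts, dtype=float)
    p, q = p / p.sum(), q / q.sum()

    def kl(a, b):  # KL(a || b) over the cluster histogram, smoothed
        return float(np.sum(a * np.log((a + eps) / (b + eps))))

    # Trace the divergence frontier: for each mixture R = lam*P + (1-lam)*Q,
    # record the point (exp(-c * KL(Q || R)), exp(-c * KL(P || R))).
    pts = [(0.0, 1.0)]  # frontier endpoint where the mixture matches P
    for lam in np.linspace(0.0, 1.0, num_mixtures):
        r = lam * p + (1.0 - lam) * q
        pts.append((np.exp(-scale * kl(q, r)), np.exp(-scale * kl(p, r))))

    # Summarize the frontier by its area under the curve (trapezoid rule):
    # identical distributions score near 1, disjoint ones near 0.
    pts.sort()
    auc = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        auc += (x1 - x0) * (y0 + y1) / 2.0
    return auc
```

As a sanity check on the sketch, feeding the same histogram in for both sides yields a score of 1, while histograms with disjoint support score close to 0; the paper's reported scores come from the official implementation, which differs in these numerical details.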