Evaluating Diversity in Automatic Poetry Generation

ACL ARR 2024 June Submission 3322 Authors

16 Jun 2024 (modified: 06 Aug 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Natural Language Generation (NLG), and generative AI more broadly, are among the most impactful research fields today. Creative NLG, such as automatic poetry generation, is a fascinating niche in this area. While most previous research has evaluated automatic poetry generation with forms of the Turing test (can humans distinguish automatically generated from human-written poetry?), we instead evaluate the diversity of automatically generated poetry by comparing distributions of generated poems to distributions of human poems along structural, lexical, semantic, and stylistic dimensions. We assess different model types (word- vs. character-level, general-purpose LLMs vs. poetry-specific models), including the very recent LLaMA3, and different types of fine-tuning (conditioned vs. unconditioned). We find that current automatic poetry systems are considerably underdiverse along multiple dimensions: they often do not rhyme sufficiently, are semantically too uniform, and do not even match the length distribution of human poetry. Among all models explored, character-level style-conditioned models perform slightly better. The limitations we identify may serve as the basis for more genuinely creative future poetry generation models.
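The distribution-comparison idea in the abstract can be illustrated with a minimal sketch: build empirical distributions over one structural dimension (poem length in lines) for a human and a generated corpus, then score their mismatch with the Jensen-Shannon divergence. This is only an illustrative example of the general approach, not the paper's actual evaluation pipeline; the toy corpora and function names are assumptions.

```python
from collections import Counter
import math

def length_distribution(poems):
    """Empirical distribution over poem lengths (in lines)."""
    counts = Counter(len(p.splitlines()) for p in poems)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between discrete distributions;
    0 means identical, 1 means maximally different."""
    support = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in support}
    def kl(a, b):
        return sum(a.get(k, 0.0) * math.log2(a.get(k, 0.0) / b[k])
                   for k in support if a.get(k, 0.0) > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical toy corpora: human poems vary in length,
# generated poems are uniformly three lines long.
human = ["a\nb\nc\nd", "a\nb\nc", "a\nb\nc\nd\ne"]
generated = ["a\nb\nc", "a\nb\nc", "a\nb\nc"]
print(round(js_divergence(length_distribution(human),
                          length_distribution(generated)), 3))  # → 0.459
```

A nonzero divergence on a dimension such as length, rhyme rate, or lexical choice is exactly the kind of signal the abstract describes: the generated distribution fails to cover the spread of the human one.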
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: automatic creation and evaluation of language resources, automatic evaluation of datasets, evaluation methodologies, evaluation
Contribution Types: Model analysis & interpretability, Data analysis
Languages Studied: German, English
Submission Number: 3322