Leveraging Large Models for Evaluating Novel Content: A Case Study on Advertisement Creativity

ACL ARR 2024 June Submission2707 Authors

15 Jun 2024 (modified: 19 Jul 2024) · CC BY 4.0
Abstract: Evaluating creativity is a challenging task, even for humans, not only because it is a subjective judgment, but also because it involves complex cognitive processes such as decomposition and drawing unlikely connections. Inspired by previous work in marketing, we break down creativity into atypicality and originality and collect fine-grained human annotations on these categories. Through controlled experiments with vision-language models (VLMs), we evaluate the alignment between models and humans on a suite of novel tasks. Our results show decent alignment between humans and models, pointing to a promising direction for future work on automatic creativity evaluation.
Paper Type: Short
Research Area: Resources and Evaluation
Research Area Keywords: creativity, subjective task, automatic evaluation
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English
Submission Number: 2707