Keywords: Curiosity-driven learning; Creativity Evaluation; Personalisation;
TL;DR: This paper introduces a novel intrinsic curiosity reward signal for creativity evaluation using LLMs
Abstract: Modern large language models (LLMs) excel at objective tasks such as evaluating mathematical reasoning and factual accuracy, yet they falter when faced with the nuanced, subjective nature of assessing creativity. In this work, we propose a novel curiosity-driven LLM-as-a-judge for evaluating creative writing that is personalised to each individual's creative judgments. We test our hypothesis on the Torrance Tests of Creative Writing (TTCW) benchmark introduced in [1], which contains stories annotated by expert humans across subjective dimensions such as \emph{Originality}. We show that our method enables models of various sizes to learn the nuanced creative judgments of different individuals, yielding improvements over a supervised fine-tuning (SFT) baseline across evaluation metrics including Pearson correlation, Cohen's $\kappa$ and F1 scores. Our method is especially useful in subjective evaluations where annotators do not all agree with one another.
[1]Chakrabarty, Tuhin, et al. "Art or artifice? large language models and the false promise of creativity." Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. 2024.
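For context, the abstract reports per-annotator agreement via Pearson correlation, Cohen's $\kappa$ and F1. The snippet below is a minimal illustrative sketch, not taken from the paper, of how such agreement metrics between an LLM judge's verdicts and one annotator's labels could be computed with scipy and scikit-learn; the variable names and toy labels are assumptions.

```python
# Minimal sketch (not from the paper): agreement metrics between a judge model's
# binary creativity verdicts and one human annotator's labels on the same stories.
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score, f1_score

# Hypothetical per-story labels for one TTCW-style dimension (e.g. Originality):
# 1 = the story passes the test, 0 = it does not.
annotator_labels = [1, 0, 1, 1, 0, 0, 1, 0]   # one expert annotator's judgments
judge_labels     = [1, 0, 1, 0, 0, 0, 1, 1]   # the LLM judge's predictions

pearson_r, _ = pearsonr(annotator_labels, judge_labels)
kappa = cohen_kappa_score(annotator_labels, judge_labels)
f1 = f1_score(annotator_labels, judge_labels)

print(f"Pearson r = {pearson_r:.3f}, Cohen's kappa = {kappa:.3f}, F1 = {f1:.3f}")
```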
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 20580