Evaluating Character Understanding of Large Language Models via Character Profiling From Fictional Works
Abstract: Large language models (LLMs) have demonstrated impressive performance and spurred numerous AI applications, in which role-playing agents (RPAs) are particularly popular, especially for fictional characters.
The prerequisite for these RPAs lies in the capability of LLMs to understand characters from fictional works.
Previous efforts evaluated this capability via basic classification tasks or characteristic imitation, falling short of capturing nuanced character understanding by LLMs.
In this paper, we propose to evaluate LLMs' character understanding capability via the character profiling task, i.e., summarizing character profiles from corresponding materials, which has been a widely adopted yet understudied practice for RPA development.
Specifically, we construct the CROSS dataset sourced from literature experts, and assess the generated profiles both by comparison with ground-truth references and by their applicability in downstream tasks.
Our experiments cover various summarization methods and LLMs, and the results validate the character understanding capability of LLMs.
We believe our constructed resource and model analysis will promote further research in this field.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: benchmarking, NLP datasets, automatic evaluation of datasets, evaluation methodologies, evaluation
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources, Data analysis
Languages Studied: English
Submission Number: 465