Evaluating Style-Personalized Text Generation: Challenges and Directions

ACL ARR 2025 July Submission 394 Authors

27 Jul 2025 (modified: 20 Aug 2025) · CC BY 4.0
Abstract: While prior research has built tools and benchmarks for style-personalized text generation, there has been limited exploration of evaluation in the low-resource, author-level style-personalized text generation setting. In this work, we question the effectiveness of widely adopted evaluation metrics such as BLEU and ROUGE, and explore other evaluation paradigms, including style embeddings and LLM-as-judge, to holistically evaluate the style-personalized text generation task. We evaluate these metrics and their ensembles using our style discrimination benchmark, which spans eight writing tasks and covers three settings: domain discrimination, authorship attribution, and discrimination between personalized and non-personalized LLM generations. We provide conclusive evidence supporting the adoption of an ensemble of diverse evaluation metrics to effectively evaluate style-personalized text generation.
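
To make the abstract's recommendation concrete, below is a minimal sketch of ensembling the metric families it names (BLEU, ROUGE, style-embedding similarity, and an LLM-as-judge score) into a single evaluation score. This is not the authors' released code: the ensemble weights, the helper name `ensemble_style_score`, and the choice of the public `AnnaWegmann/Style-Embedding` encoder are all illustrative assumptions, and the judge score is assumed to be pre-computed elsewhere.

```python
# Illustrative sketch of a diverse-metric ensemble for style-personalized
# text generation evaluation. Not the paper's implementation.
import sacrebleu
from rouge_score import rouge_scorer
from sentence_transformers import SentenceTransformer, util

def ensemble_style_score(candidate: str, reference: str,
                         llm_judge_score: float) -> float:
    """Combine lexical, embedding-based, and judge-based signals into one score.

    llm_judge_score is assumed to be pre-computed in [0, 1] by prompting a
    judge LLM; obtaining it is outside the scope of this sketch.
    """
    # Lexical overlap metrics, normalized to [0, 1].
    bleu = sacrebleu.sentence_bleu(candidate, [reference]).score / 100.0
    rouge_l = rouge_scorer.RougeScorer(["rougeL"]).score(
        reference, candidate)["rougeL"].fmeasure

    # Style-embedding cosine similarity; the model name is an illustrative
    # choice of a publicly available style encoder.
    encoder = SentenceTransformer("AnnaWegmann/Style-Embedding")
    emb = encoder.encode([candidate, reference], convert_to_tensor=True)
    style_sim = util.cos_sim(emb[0], emb[1]).item()

    # Ensemble weights are arbitrary placeholders, not tuned values.
    weights = {"bleu": 0.2, "rouge": 0.2, "style": 0.3, "judge": 0.3}
    return (weights["bleu"] * bleu
            + weights["rouge"] * rouge_l
            + weights["style"] * style_sim
            + weights["judge"] * llm_judge_score)
```

In practice, the weighting (or a learned combination) would be validated against a discrimination benchmark like the one described above, rather than fixed by hand.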
Paper Type: Short
Research Area: Resources and Evaluation
Research Area Keywords: style analysis, automatic evaluation, benchmarking
Contribution Types: Approaches to low-resource settings, Data resources, Data analysis
Languages Studied: English
Submission Number: 394