PerSEval: Assessing Personalization in Text Summarizers

Published: 31 Oct 2024 · Last Modified: 31 Oct 2024 · Accepted by TMLR · License: CC BY 4.0
Abstract: Personalized summarization models cater to individuals' subjective understanding of saliency, as represented by their reading history and current topics of attention. Existing personalized text summarizers are primarily evaluated with accuracy measures such as BLEU, ROUGE, and METEOR. However, a recent study argued that accuracy measures are inadequate for evaluating the $\textit{degree of personalization}$ of these models and proposed EGISES, the first metric for evaluating personalized text summaries. It further suggested that accuracy is a separate aspect and should be evaluated on its own. In this paper, we challenge the necessity of an accuracy leaderboard, arguing that relying on accuracy-based aggregated results can lead to misleading conclusions. To support this, we delve deeper into EGISES, demonstrating both theoretically and empirically that it measures the $\textit{degree of responsiveness}$, a necessary but not sufficient condition for degree-of-personalization. We subsequently propose PerSEval, a novel measure that satisfies the required sufficiency condition. Based on the benchmarking of ten SOTA summarization models on the PENS dataset, we empirically establish that -- (i) PerSEval is reliable w.r.t. human-judgment correlation (Pearson's $r$ = 0.73; Spearman's $\rho$ = 0.62; Kendall's $\tau$ = 0.42), (ii) PerSEval has high rank-stability, (iii) PerSEval as a rank-measure is not entailed by EGISES-based ranking, and (iv) PerSEval can serve as a standalone rank-measure without the need for any aggregated ranking.
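The abstract reports reliability as correlation between PerSEval model scores and human judgment. Below is a minimal illustrative sketch (not the authors' released code; see the Code link for that) of how such Pearson, Spearman, and Kendall correlations are typically computed with SciPy; the score arrays are placeholder values, not results from the paper.

```python
# Illustrative sketch: correlating system-level PerSEval scores with mean
# human personalization ratings. Values below are hypothetical placeholders.
from scipy.stats import pearsonr, spearmanr, kendalltau

perseval_scores = [0.41, 0.35, 0.52, 0.29, 0.47]  # hypothetical per-model scores
human_ratings = [3.2, 2.8, 4.1, 2.5, 3.9]         # hypothetical human judgments

r, _ = pearsonr(perseval_scores, human_ratings)     # linear correlation
rho, _ = spearmanr(perseval_scores, human_ratings)  # rank correlation
tau, _ = kendalltau(perseval_scores, human_ratings) # ordinal concordance
print(f"Pearson r={r:.2f}, Spearman rho={rho:.2f}, Kendall tau={tau:.2f}")
```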
Submission Length: Long submission (more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=yqT7eBz1VJ
Changes Since Last Submission: Dear Action Editors and Reviewers, we thank all the reviewers for their time and valuable feedback. We sincerely believe it has greatly helped us improve the quality of the manuscript. We have added the code link to the camera-ready version. Best, Authors
Code: https://github.com/KDM-LAB/Perseval-TMLR
Supplementary Material: pdf
Assigned Action Editor: ~Manzil_Zaheer1
Submission Number: 2691