Keywords: Personalization, Summarization, Personalized Summarization, User Preference Modeling
TL;DR: PerDucer is a temporal-knowledge-graph-based booster that improves the personalization of summarization models by leveraging users' dynamically evolving reading histories.
Abstract: Document summarization is useful for the quick selection and consumption of $\textit{highly subjective}$ content of interest. Identifying $\textit{salient}$ information in a given document, especially one covering multiple aspects, is non-trivial, which further motivates personalized summarization. Modern Large Language Models (LLMs) have shown promising results for in-context learning-based summarization. However, earlier works have demonstrated their inability to handle dynamically evolving user-preference histories (in contrast to conventional modeling of static personas). To address this, we propose PerDucer, a $\textit{summarizer-model-agnostic personalization booster}$ that predicts the user's next interaction and thereby generates personalized key-phrases from a given query document. These key-phrases serve as lightweight cues that guide $\textit{frozen}$ summarization models, both small and large. Experiments on the PENS and OpenAI-Reddit datasets reveal that four PerDucer-boosted SOTA LLMs outperform their best-performing history-prompt baselines with an average gain of 0.47 $\uparrow$ across PSE variants. Two boosted SLMs achieve comparable gains, with the best (SmolLM2-1.7B) reaching 98.6% of the performance of DeepSeek-14B (the best-performing LLM).
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 14455