Keywords: Personalized Text Generation, Cold-start Problem, Graph-based Reasoning, Large Language Model
Abstract: Large Language Model (LLM) personalization holds great promise for tailoring responses by leveraging personal context and history. However, real-world users often have sparse interaction histories and limited personal context — for example, cold-start users on social platforms and newly registered customers on e-commerce platforms — which compromises LLM-based personalized generation. To address this challenge, we introduce **GraSPeR** (**Gra**ph-based **S**parse **Pe**rsonalized **R**easoning), a novel framework for enhancing personalized text generation under sparse context. GraSPeR first augments the user context by predicting items the user is likely to interact with in the future. With reasoning alignment, it then generates texts for these predicted interactions to enrich the augmented context. Finally, it generates personalized outputs conditioned on both the real and synthetic histories, ensuring alignment with the user's style and preferences. Extensive experiments on three benchmark personalized generation datasets show that GraSPeR achieves significant performance gains, substantially improving personalization in sparse user context settings.
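The three-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustrative outline, not the authors' implementation: the function names, the placeholder heuristics inside each stage, and the string-based "generation" are all hypothetical stand-ins for the graph-based prediction and LLM calls the paper actually uses.

```python
# Hypothetical sketch of the GraSPeR pipeline; bodies are illustrative stand-ins.

def predict_future_items(history):
    # Stage 1: augment sparse user context with items the user would
    # likely interact with (stand-in: derive pseudo-items from history).
    return [f"predicted_{item}" for item in history]

def generate_synthetic_texts(items):
    # Stage 2: generate texts for the predicted interactions
    # (the paper uses reasoning alignment; here a placeholder string).
    return [f"synthetic text about {item}" for item in items]

def personalized_generate(real_history, synthetic_texts, prompt):
    # Stage 3: condition the final output on both real and
    # synthetic histories so it reflects user style and preferences.
    context = real_history + synthetic_texts
    return f"{prompt} | conditioned on {len(context)} context items"

history = ["item_a", "item_b"]          # sparse real history
items = predict_future_items(history)   # stage 1
texts = generate_synthetic_texts(items) # stage 2
output = personalized_generate(history, texts, "Write a review")  # stage 3
print(output)
```

The key design point visible even in this sketch is that the final generation step receives the concatenation of real and synthetic context, so the synthetic histories only enrich, and never replace, the user's true interactions.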
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 20347