EmbSum: Leveraging the Summarization Capabilities of Large Language Models for Content-Based Recommendations
Abstract: Content-based recommendation systems play a crucial role in delivering personalized content to users in the digital world. In this
work, we introduce EmbSum, a novel framework that enables offline pre-computation of user and candidate-item representations while capturing
the interactions within the user engagement history. By utilizing
a pretrained encoder-decoder model and poly-attention layers,
EmbSum derives User Poly-Embedding (UPE) and Content Poly-Embedding (CPE) to calculate relevance scores between users and
candidate items. EmbSum actively learns from long user engagement
histories by generating user-interest summaries with supervision
from a large language model (LLM). The effectiveness of EmbSum
is validated on two datasets from different domains, surpassing
state-of-the-art (SoTA) methods with higher accuracy and fewer parameters. Additionally, the model’s ability to generate summaries
of user interests serves as a valuable by-product, enhancing its
usefulness for personalized content recommendations.
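To make the offline pre-computation concrete, the sketch below shows one plausible PyTorch implementation of a poly-attention pooling layer that turns encoder hidden states into a User Poly-Embedding (UPE) or Content Poly-Embedding (CPE), together with a simple inner-product relevance score between the two. The dimensions, the number of context codes, and the mean aggregation over the interaction matrix are illustrative assumptions, not EmbSum's exact formulation, and the LLM-supervised summarization step is omitted.

```python
import torch
import torch.nn as nn


class PolyAttention(nn.Module):
    """Pools a sequence of encoder hidden states into K context-conditioned
    embeddings via learnable context codes (a generic poly-attention sketch;
    the projections and hyperparameters here are illustrative, not the paper's)."""

    def __init__(self, hidden_dim: int, num_codes: int):
        super().__init__()
        self.codes = nn.Parameter(torch.randn(num_codes, hidden_dim) * 0.02)
        self.proj = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, hidden: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim); mask: (batch, seq_len), 1 = valid token
        scores = torch.einsum("kd,bld->bkl", self.codes, torch.tanh(self.proj(hidden)))
        scores = scores.masked_fill(mask.unsqueeze(1) == 0, float("-inf"))
        attn = torch.softmax(scores, dim=-1)                # (batch, K, seq_len)
        return torch.einsum("bkl,bld->bkd", attn, hidden)   # (batch, K, hidden_dim)


def relevance_score(upe: torch.Tensor, cpe: torch.Tensor) -> torch.Tensor:
    """Scores a user against a candidate item from precomputed poly-embeddings.
    The aggregation (mean over the flattened K x M interaction matrix) is a
    simplifying assumption for illustration."""
    # upe: (batch, K, hidden_dim); cpe: (batch, M, hidden_dim)
    interactions = torch.einsum("bkd,bmd->bkm", upe, cpe)
    return interactions.flatten(start_dim=1).mean(dim=-1)   # (batch,)


# Example: UPE from an encoded engagement history, CPE from an encoded candidate item.
batch, seq_len, dim = 4, 128, 768
hidden_states = torch.randn(batch, seq_len, dim)   # stand-in for encoder outputs (e.g. T5-style)
mask = torch.ones(batch, seq_len)
user_pooler = PolyAttention(hidden_dim=dim, num_codes=8)   # K user context codes (assumed)
item_pooler = PolyAttention(hidden_dim=dim, num_codes=4)   # M content context codes (assumed)
upe = user_pooler(hidden_states, mask)   # can be precomputed offline per user
cpe = item_pooler(hidden_states, mask)   # can be precomputed offline per item
scores = relevance_score(upe, cpe)
print(scores.shape)  # torch.Size([4])
```

Because the UPE depends only on the user's engagement history and the CPE only on the candidate item, both can be computed and cached offline; at serving time only the inexpensive inner-product interactions between the cached embeddings need to be evaluated.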