Adaptive and Representative Multi-Interest Modeling for Recommendation with Large Language Model

ACL ARR 2026 January Submission1991 Authors

01 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Recommender Systems, Multi-interest Modeling, Large Language Model
Abstract: Large language models (LLMs) show potential for multi-interest analysis of users in recommender systems, going beyond the heuristic assumptions of existing methods, e.g., that co-occurring items indicate the same interest. Despite this promise, two key challenges remain. First, the granularity of the interests an LLM generates from raw behaviors is unconstrained, possibly leading to overly fine or overly coarse interest groupings. Second, applying an LLM to individual user behaviors lacks a global perspective on how items relate across users. In this paper, we propose an LLM-driven adaptive and representative multi-interest modeling framework to address these challenges. At the user-individual level, we exploit LLM analysis and alleviate the granularity problem by adaptively aggregating semantic clusters into collaborative multi-interests. At the user-crowd level, to mitigate the limited insight available in individual behaviors, we formulate a max covering problem that expands the scope of LLM analysis with compactness and representativeness, disentangling interest representations from a global perspective. Experiments on real-world datasets show that our approach outperforms various baselines.
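The abstract's max covering formulation is not spelled out on this page, but the standard greedy (1 − 1/e)-approximation for max coverage gives a rough sense of how representative behaviors could be selected under a budget. The sketch below is illustrative only: the function name, the `candidate_sets` input (mapping a candidate, e.g., a user behavior sequence, to the set of items it covers), and the budget `k` are all hypothetical and not taken from the paper.

```python
def greedy_max_cover(candidate_sets, k):
    """Greedy (1 - 1/e)-approximation for the max coverage problem.

    candidate_sets: dict mapping a candidate id to the set of items
        that candidate covers (illustrative stand-in for the paper's
        user-crowd selection objective).
    k: budget on how many candidates may be selected.
    Returns the selected candidate ids and the union of covered items.
    """
    covered = set()
    selected = []
    remaining = dict(candidate_sets)
    for _ in range(k):
        if not remaining:
            break
        # Pick the candidate covering the most not-yet-covered items.
        best = max(remaining, key=lambda c: len(remaining[c] - covered))
        if not remaining[best] - covered:
            break  # no candidate adds new coverage
        covered |= remaining.pop(best)
        selected.append(best)
    return selected, covered
```

Under this toy framing, `greedy_max_cover({"u1": {1, 2, 3}, "u2": {4, 5}, "u3": {1}}, k=2)` would select `u1` and then `u2`, covering all five items with two candidates.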
Paper Type: Long
Research Area: Information Extraction and Retrieval
Research Area Keywords: re-ranking, applications
Languages Studied: English
Submission Number: 1991