Abstract: Leveraging auxiliary textual data can help with user profiling and item characterization in recommender systems (RSs). However, incomplete item descriptions and the subjectivity of user-uploaded content limit the potential of textual information in RSs. Although large language models (LLMs) have emerged as promising tools for description enhancement, they may suffer from hallucinations when user-item collaborative information is not fully exploited. To this end, we propose a Graph-aware Convolutional LLM method, which captures fine-grained collaborative information behind high-order relations in the user-item graph. To bridge the gap between graph structures and LLMs, we employ the LLM as an aggregator in the graph convolution process, prompting it to infer graph-based knowledge iteratively. To mitigate the information overload associated with large-scale graphs, we segment the graph processing into manageable steps, progressively incorporating multi-hop information in a least-to-most manner. Experiments on three real-world datasets demonstrate that our method consistently outperforms state-of-the-art approaches.
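To make the described procedure concrete, below is a minimal sketch (not the authors' code) of how an LLM can act as the aggregator in a graph-convolution-style loop, folding neighbor descriptions into a node's profile hop by hop and in small chunks to limit prompt size. The function `call_llm`, the prompt wording, and all parameter names are illustrative assumptions, not part of the paper.

```python
# Hedged sketch: LLM-as-aggregator over a user-item graph, least-to-most over hops.
# `call_llm` is a hypothetical stand-in for any chat-completion API client.

from typing import Callable, Dict, List

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API client."""
    return f"[summary of: {prompt[:60]}...]"

def graph_aware_descriptions(
    adjacency: Dict[str, List[str]],   # node id -> 1-hop neighbor ids
    descriptions: Dict[str, str],      # node id -> raw textual description
    num_hops: int = 2,
    chunk_size: int = 5,               # neighbors per prompt to avoid overload
    llm: Callable[[str], str] = call_llm,
) -> Dict[str, str]:
    """Iteratively refine each node's description with LLM-aggregated neighbor text."""
    current = dict(descriptions)
    for hop in range(1, num_hops + 1):
        updated = {}
        for node, neighbors in adjacency.items():
            summary = current[node]
            # Segment neighbors into manageable chunks; each chunk's information
            # is merged into the running summary (least-to-most style).
            for start in range(0, len(neighbors), chunk_size):
                chunk = neighbors[start:start + chunk_size]
                neighbor_text = "\n".join(current[n] for n in chunk)
                prompt = (
                    f"Hop {hop}. Current profile of {node}:\n{summary}\n\n"
                    f"Descriptions of related nodes:\n{neighbor_text}\n\n"
                    "Update the profile by integrating the shared preferences above."
                )
                summary = llm(prompt)
            updated[node] = summary
        current = updated
    return current

if __name__ == "__main__":
    adj = {"user_1": ["item_a", "item_b"], "item_a": ["user_1"], "item_b": ["user_1"]}
    desc = {"user_1": "Enjoys sci-fi novels.", "item_a": "A space-opera trilogy.",
            "item_b": "A cyberpunk anthology."}
    print(graph_aware_descriptions(adj, desc)["user_1"])
```

Repeating the outer loop deepens the receptive field one hop at a time, so later prompts only ever see an already-summarized profile plus a bounded chunk of new neighbor text, which is the overload-mitigation idea the abstract describes.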
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: large language model, recommender systems, graph convolutional networks, applications
Contribution Types: NLP engineering experiment
Languages Studied: English, Chinese
Submission Number: 478