Large Language Model Ranker with Graph Reasoning for Zero-Shot Recommendation

Published: 01 Jan 2024, Last Modified: 07 Feb 2025, ICANN (5) 2024, CC BY-SA 4.0
Abstract: Large Language Models (LLMs), with their powerful reasoning abilities and extensive open-world knowledge, have substantially improved recommender systems by leveraging user interactions to provide personalized suggestions, particularly in zero-shot scenarios where no prior training data is available. However, existing approaches frequently fail to capture complex, higher-order information. To address this limitation, we integrate user-item bipartite graph information into LLMs. This integration is challenging because of the inherent gap between graph data and sequential text, as well as the input token limits of LLMs. We propose a novel Graph Reasoning LLM Ranker framework for Zero-Shot Recommendation (G-LLMRanker) to overcome these challenges. Specifically, G-LLMRanker constructs a semantic tree enriched with higher-order information for each node in the graph and designs an instruction template that converts the tree into text sequences LLMs can comprehend. To cope with the input token limits of LLMs, G-LLMRanker further reframes recommendation as a conditional sorting task, in which the graph-augmented text sequences serve as the condition and items selected through a Mixture of Experts approach act as the candidates. Experiments on public datasets demonstrate that G-LLMRanker significantly outperforms zero-shot baselines on recommendation tasks.
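The abstract describes a pipeline of turning a user-item bipartite graph into a textual instruction and posing ranking as conditional sorting over a short candidate list. The following is a minimal Python sketch of that idea under stated assumptions: the toy graph, the tree depth of 2, the template wording, and the fixed candidate list are all hypothetical, since the abstract does not specify them (in the paper, candidates come from a Mixture of Experts selector, which is not reproduced here).

```python
from collections import defaultdict

# Hypothetical user-item bipartite graph as adjacency lists; in the
# paper's setting this would come from real interaction logs.
user_items = {
    "u1": ["Inception", "Interstellar"],
    "u2": ["Interstellar", "Dunkirk"],
}
item_users = defaultdict(list)
for u, items in user_items.items():
    for it in items:
        item_users[it].append(u)

def semantic_tree(user, depth=2):
    """BFS from `user` over the bipartite graph, collecting neighbors
    level by level (items at odd depths, users at even depths).
    Depth 2 already carries higher-order signal: users who share
    items with `user`. The depth is an assumption, not the paper's."""
    levels, frontier, seen = [], [user], {user}
    for _ in range(depth):
        nxt = []
        for node in frontier:
            for n in user_items.get(node, item_users.get(node, [])):
                if n not in seen:
                    seen.add(n)
                    nxt.append(n)
        levels.append(nxt)
        frontier = nxt
    return levels

def render_prompt(user, candidates):
    """Flatten the semantic tree into an instruction template the LLM
    can read, then pose recommendation as conditional sorting over a
    short candidate list (kept small to respect token limits). The
    template wording is illustrative only."""
    items, co_users = semantic_tree(user)
    return (
        f"User {user} interacted with: {', '.join(items)}.\n"
        f"Users with overlapping taste: {', '.join(co_users)}.\n"
        f"Given this context, sort the following candidate items "
        f"from most to least relevant for {user}:\n"
        + "\n".join(f"- {c}" for c in candidates)
    )

# Candidates would be produced by the MoE selector; a fixed list
# stands in for it here.
print(render_prompt("u1", ["Dunkirk", "Tenet", "Memento"]))
```

The resulting prompt conditions the LLM's sort on graph-derived context rather than on the user's interaction list alone, which is how the framework injects higher-order structure without exceeding the input token budget.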