Tuning-Free LLM Can Build A Strong Recommender Under Sparse Connectivity And Knowledge Gap Via Extracting Intent

Published: 23 Oct 2025 · Last Modified: 08 Nov 2025 · LOG 2025 Poster · CC BY 4.0
Keywords: Recommendation system, Large Language Model (LLM), Knowledge Graph, Retrieval Augmented Generation (RAG)
TL;DR: This paper develops a recommendation approach that uses a tuning-free LLM to build and enrich an intent-centric knowledge graph, mitigating the cold-start problem with less data and compute.
Abstract: Recent advances in recommendation with large language models (LLMs) often rely on either commonsense augmentation at the item-category level or implicit intent modeling on existing knowledge graphs. However, such approaches struggle to capture grounded user intents and to handle sparsity and cold-start scenarios. In this work, we present the LLM-based Intent Knowledge Graph Recommender (IKGR), a novel framework that constructs an intent-centric knowledge graph where both users and items are explicitly linked to intent nodes extracted by a tuning-free, RAG-guided LLM pipeline. By grounding intents in external knowledge sources and user profiles, IKGR canonically represents what a user seeks and what an item satisfies as first-class entities. To alleviate sparsity, we further introduce a mutual-intent connectivity densification strategy, which shortens semantic paths between users and long-tail items without requiring cross-graph fusion. Finally, a lightweight GNN layer is employed on top of the intent-enhanced graph to produce recommendation signals with low latency. Extensive experiments on public and enterprise datasets demonstrate that IKGR consistently outperforms strong baselines, particularly on cold-start and long-tail slices, while remaining efficient through a fully offline LLM pipeline.
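To make the pipeline described in the abstract concrete, below is a minimal, runnable Python sketch of its three stages: intent extraction, mutual-intent densification, and scoring. All names and data here are hypothetical illustrations, not the authors' code; the tuning-free, RAG-guided LLM extractor is stubbed with keyword matching so the sketch executes, and the scoring function stands in for the lightweight GNN layer.

# Hypothetical sketch of the IKGR pipeline stages; not the authors' implementation.
from collections import defaultdict
from itertools import product

def extract_intents(text: str) -> set[str]:
    """Stand-in for the tuning-free, RAG-guided LLM intent extractor.
    A real pipeline would prompt an LLM grounded in external knowledge;
    here we fake it with keyword matching so the sketch is runnable."""
    vocab = {"budget travel", "wireless audio", "home office"}
    return {v for v in vocab if any(tok in text.lower() for tok in v.split())}

# Toy user profiles and item descriptions (illustrative only).
users = {"u1": "looking for cheap travel gear", "u2": "setting up a home office"}
items = {"i1": "budget travel backpack", "i2": "wireless audio earbuds",
         "i3": "ergonomic home office chair"}

# Stage 1: build the intent-centric graph, linking both users and items
# to explicit intent nodes.
user_intents = {u: extract_intents(t) for u, t in users.items()}
item_intents = {i: extract_intents(t) for i, t in items.items()}

# Stage 2: mutual-intent connectivity densification, where a user-item edge
# is added whenever the two share an intent node, shortening semantic paths
# to long-tail items without cross-graph fusion.
edges = defaultdict(set)
for (u, ui), (i, ii) in product(user_intents.items(), item_intents.items()):
    if ui & ii:
        edges[u].add(i)

# Stage 3: trivial stand-in for the lightweight GNN layer, ranking items
# by intent overlap with the user.
def score(u: str, i: str) -> int:
    return len(user_intents[u] & item_intents[i])

for u in users:
    ranked = sorted(edges[u], key=lambda i: -score(u, i))
    print(u, "->", ranked)

Running the sketch links u1 to the budget-travel item and u2 to the home-office item via their shared intent nodes; in the actual framework these densified edges feed a GNN rather than a set-overlap score.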
Submission Type: Full paper proceedings track submission (max 9 main pages).
Software: https://github.com/CapitalOne-Research/IKGR
Poster: png
Submission Number: 60