Keywords: large language models, recommender systems, semantic retrieval, heuristic ranking
TL;DR: We show that LLMs, enhanced with semantics and collaborative priors via smart prompting, can outperform fine-tuned recommenders without extra training.
Abstract: Large Language Models (LLMs) perform well at ranking small candidate sets but, without task-specific training, remain inferior to well-trained conventional recommender models (CRMs) and to fine-tuned LLMs.
We propose $\textbf{SCSRec}$ ($\textbf{S}$emantic and $\textbf{C}$ollaborative $\textbf{S}$ignal-enhanced $\textbf{Rec}$ommendation), a framework that closes this gap via:
(1) an off-the-shelf LLM generating rich textual representations from item features and user behaviors;
(2) a multi-view semantic retriever (user–user, item–item, user–item) producing diverse candidate pools; and
(3) a heuristic ranking prompt that incorporates CRM predictions, enabling the LLM to combine collaborative priors with semantic reasoning.
Experiments on three public benchmarks and an industrial dataset show that SCSRec consistently outperforms both prompting baselines and fine-tuned LLM recommenders, without additional training.
These results demonstrate that prompt engineering, enhanced with semantic and collaborative signals, offers a competitive and cost-effective alternative to model fine-tuning for recommendation tasks.
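To make the three components concrete, here is a minimal hypothetical sketch, not the paper's implementation: `embed` stands in for an off-the-shelf text encoder, `similar_user_items` for the user–user view's neighbor lookup, and `crm_scores` for predictions from a conventional recommender model.

```python
# Illustrative sketch of an SCSRec-style pipeline (assumed helpers, not the
# authors' code): multi-view semantic retrieval plus a heuristic ranking
# prompt that exposes CRM predictions to the LLM.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in encoder: deterministic pseudo-embedding keyed on the text."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return rng.standard_normal(64)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def multi_view_candidates(user_text, history_texts, similar_user_items,
                          item_texts, k=3):
    """Union of the user-item, item-item, and user-user retrieval views."""
    u = embed(user_text)
    vecs = {item: embed(t) for item, t in item_texts.items()}
    # User-item view: items semantically closest to the user's profile text.
    ui = sorted(vecs, key=lambda i: -cosine(u, vecs[i]))[:k]
    # Item-item view: items closest to anything in the user's history.
    hist = [embed(t) for t in history_texts]
    ii = sorted(vecs, key=lambda i: -max(cosine(h, vecs[i]) for h in hist))[:k]
    # User-user view: items consumed by semantically similar users
    # (precomputed here; a real system would retrieve neighbor users first).
    uu = similar_user_items[:k]
    return list(dict.fromkeys(ui + ii + uu))  # de-duplicated candidate pool

def ranking_prompt(user_text, candidates, crm_scores):
    """Heuristic ranking prompt that injects CRM scores as collaborative priors."""
    lines = [f"- {c} (CRM score: {crm_scores.get(c, 0.0):.2f})" for c in candidates]
    return (
        f"User profile:\n{user_text}\n\n"
        "Candidate items with collaborative-model scores:\n" + "\n".join(lines)
        + "\n\nRank the candidates by likely preference, treating the CRM "
          "scores as a collaborative prior and the texts as semantic evidence."
    )

# Usage example with toy data:
items = {"A": "sci-fi epic", "B": "cozy mystery", "C": "space opera", "D": "romance"}
pool = multi_view_candidates("loves science fiction", ["watched a sci-fi epic"],
                             ["D"], items, k=2)
print(ranking_prompt("loves science fiction", pool, {"A": 0.9, "C": 0.7}))
```

The prompt text above is only a schematic of the "heuristic ranking prompt" idea; the paper's actual prompt templates and retrieval models are not reproduced here.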
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 1299