Keywords: Recommendation; LLM; Bias
Abstract: The integration of large language models (LLMs) into recommender systems has garnered attention due to their ability to leverage rich semantic representations. However, these models suffer from a critical limitation: \textbf{knowledge bias}, a systematic tendency to over-recommend items that are prominent in their pretraining knowledge, regardless of actual user interests. Although recent studies have focused on fine-tuning LLMs to better align with user interaction data, our investigation reveals that such fine-tuning fails to fundamentally correct this deep-seated bias.
In this work, we present the first in-depth analysis of knowledge bias in LLM-based recommendation. Empirically, we observe that this bias persists across different backbones, even under large-scale fine-tuning. To address this, we propose \textbf{\model}, a novel decoding-time alignment method to mitigate knowledge bias.
Specifically, we quantify the LLM's knowledge of biased items and then realign its output probabilities to match the target data distribution.
For validation, we provide both theoretical justification and empirical results on two datasets, demonstrating that \model~improves the quality and diversity of LLM-based sequential recommendation.
Source code\footnote{\url{https://anonymous.4open.science/r/ReKnow-B17E}} is provided to support reproducibility.
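To make the decoding-time realignment concrete, here is a minimal sketch in Python/PyTorch. It is an illustration of the general idea only, not the paper's actual method: the `bias_scores` vector, the `alpha` strength knob, and the function name are all assumptions introduced for this example.

```python
import torch
import torch.nn.functional as F

def realign_logits(logits: torch.Tensor,
                   bias_scores: torch.Tensor,
                   alpha: float = 1.0) -> torch.Tensor:
    """Decoding-time realignment sketch (hypothetical): down-weight item
    tokens whose estimated pretraining-knowledge bias is high, shifting the
    output distribution toward the target interaction-data distribution.

    logits:      raw next-item logits over the item vocabulary, shape (V,)
    bias_scores: per-item knowledge-bias estimates in [0, 1], shape (V,)
    alpha:       correction strength (assumed hyperparameter)
    """
    # Penalize items the model over-recommends due to pretraining knowledge.
    return logits - alpha * bias_scores

# Toy usage: three candidate items, the first heavily favored by pretraining.
logits = torch.tensor([4.0, 1.0, 0.5])
bias_scores = torch.tensor([0.9, 0.1, 0.0])
probs = F.softmax(realign_logits(logits, bias_scores, alpha=3.0), dim=-1)
print(probs)  # probability mass shifts away from the biased first item
```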
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: model bias/fairness evaluation; model bias/unfairness mitigation
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 1045