Abstract: Sequential recommendation with generative retrieval has recently garnered significant attention. However, such generative recommenders are typically trained to maximize the prediction probability of the next item. This objective explicitly optimizes the accuracy of the recommendation results but lacks awareness of other feasible items. Although leveraging large language models (LLMs) that incorporate world knowledge and introducing various auxiliary tasks can mitigate this issue, the high inference costs of LLM-based recommenders make them difficult to deploy in practice. In this paper, we propose LOHRec, a novel learning framework that exploits the order and hierarchy in generative recommendation with quantized identifiers to further explore the effectiveness ceiling of lightweight generative recommenders. Comprehensive experiments demonstrate that generative recommenders trained with our framework consistently outperform previous state-of-the-art (SOTA) models across different datasets under comparable parameter budgets. Additionally, we empirically show that LOHRec can effectively align lightweight generative recommenders with LLM recommendation preferences in low-resource scenarios.
Our code is available at [https://anonymous.4open.science/r/LOHRec/](https://anonymous.4open.science/r/LOHRec/).
Paper Type: Long
Research Area: Information Retrieval and Text Mining
Research Area Keywords: Sequential Recommendation, Generative Retrieval, LLM Alignment, Contrastive Learning
Contribution Types: Approaches to low-resource settings, Approaches to low compute settings-efficiency, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 540