Keywords: Generative recommendation, Sequential recommendation, Curriculum learning
Abstract: Generative recommendation, which directly generates item identifiers, has emerged as a promising paradigm for recommender systems. However, its potential is fundamentally constrained by its reliance on purely autoregressive training: the model focuses solely on predicting the next item, ignores the rich long-term dependencies in a user's interaction history, and thus fails to grasp the underlying intent.
To address this limitation, we propose Masked History Learning (MHL), a novel training framework that shifts the objective from simple next-step prediction to deep comprehension of history. MHL augments the autoregressive objective with an auxiliary task of reconstructing masked items, compelling the model to understand ``why'' an item path is formed from the user's past behaviors, rather than just ``what'' item comes next.
We introduce two key contributions to enhance this framework: (1) an entropy-guided masking policy that intelligently targets the most informative historical items for reconstruction, and (2) a curriculum learning scheduler that bridges the gap between bidirectional training and autoregressive inference.
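The two contributions above can be sketched together in a few lines. This is an illustrative sketch under our own assumptions, not the paper's implementation: we assume the masking policy ranks historical positions by the Shannon entropy of the model's predictive distribution at each position and masks the most uncertain ones, and we assume the curriculum scheduler linearly anneals the mask ratio toward zero so training converges to the purely autoregressive inference setting. All function names and the linear-decay schedule are hypothetical.

```python
import math

def entropy(probs):
    # Shannon entropy of a predictive distribution over items;
    # higher entropy = the model is more uncertain at this position.
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_guided_mask(position_probs, mask_ratio=0.3):
    # position_probs: one probability distribution over the item
    # vocabulary per historical position. Returns the indices of the
    # positions to mask: the top-k by entropy, i.e. the most
    # informative items to force the model to reconstruct.
    ents = [entropy(p) for p in position_probs]
    k = max(1, round(mask_ratio * len(position_probs)))
    ranked = sorted(range(len(ents)), key=lambda i: ents[i], reverse=True)
    return sorted(ranked[:k])

def curriculum_mask_ratio(step, total_steps, start_ratio=0.3):
    # Assumed linear schedule: start with heavy masking (bidirectional
    # reconstruction) and decay to zero, matching autoregressive
    # inference at the end of training.
    return start_ratio * max(0.0, 1.0 - step / total_steps)
```

For example, given three historical positions where the model is certain about the first, split on the second, and uniform over four items on the third, the policy masks the third position, and the schedule interpolates from the initial ratio down to pure next-item prediction.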
Experimental results on three public datasets show that our method significantly outperforms state-of-the-art generative models, highlighting that a comprehensive understanding of the past is crucial for accurately predicting users' future paths. The code will be released publicly.
Primary Area: generative models
Submission Number: 19010