SkipGR: Accelerating Generative Recommendations with Efficient Skipping

08 Sept 2025 (modified: 01 Oct 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Generative Recommendations; dataset pruning
Abstract: Generative Recommendation Models (GRMs) frame recommendation as autoregressive sequence modeling and achieve strong accuracy. However, training on all exposure tokens or sequences can be inefficient: user logs contain a large proportion of redundant or uninformative interactions that inflate the token budget without yielding meaningful learning signal. To address this inefficiency, we introduce semantic token entropy, which maps an item's multimodal content into a compact semantic vocabulary and thereby makes entropy estimation tractable. We train a GR-4B model from scratch on an industrial-scale recommendation corpus and analyze its token entropy; the analysis shows that high-entropy tokens and sequences produce larger gradient updates. Building on these insights, we develop SkipGR, a skipping policy based on semantic entropy that selectively and adaptively bypasses uninformative tokens and sequences. SkipGR accelerates convergence and reduces computational cost, yielding results that even surpass training on the full corpus. Extensive experiments on both public and large-scale industrial datasets validate these gains.
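To make the entropy-based skipping idea concrete, the sketch below illustrates one plausible way to turn per-token predictive entropy over a semantic vocabulary into a training-loss mask. This is a minimal illustration assuming a PyTorch setup; the function name `entropy_skip_mask`, the threshold, and the per-sequence keep-floor are hypothetical choices, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def entropy_skip_mask(logits, entropy_threshold=1.0, keep_fraction_floor=0.1):
    """Hypothetical sketch: build a per-token loss mask that skips low-entropy tokens.

    logits: [batch, seq_len, vocab] predictive distribution over the semantic vocabulary.
    entropy_threshold: tokens whose predictive entropy (in nats) is below this are skipped.
    keep_fraction_floor: always keep at least this fraction of tokens per sequence,
                         so no sequence is emptied entirely.
    """
    probs = F.softmax(logits, dim=-1)
    # Shannon entropy of the predictive distribution, per token.
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1)  # [batch, seq_len]

    mask = entropy >= entropy_threshold

    # Guarantee a minimum number of kept tokens per sequence (the highest-entropy ones).
    batch, seq_len = entropy.shape
    k = max(1, int(keep_fraction_floor * seq_len))
    topk_idx = entropy.topk(k, dim=-1).indices
    floor = torch.ones_like(topk_idx, dtype=torch.bool)
    mask = mask.scatter(1, topk_idx, floor)
    return mask

# Usage: multiply the token-level cross-entropy loss by this mask before averaging,
# so skipped tokens contribute no gradient. Sequence-level skipping could analogously
# drop sequences whose mean token entropy falls below a (hypothetical) sequence threshold.
```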
Primary Area: generative models
Submission Number: 3069