SR-PFN: Yet Another Sequential Recommendation Paradigm

Submitted: 20 Sept 2025 (modified: 03 Dec 2025) · ICLR 2026 Conference Withdrawn Submission · License: CC BY 4.0
Keywords: Sequential Recommendation, Prior-data Fitted Networks, In-context learning, Bayesian Inference, Synthetic Data
TL;DR: This paper introduces SR-PFN, the first PFN for sequential recommendation: pretrained once on a synthetic prior, it performs in-context inference, surpasses seven baselines, and delivers up to 6× faster inference than LLM-based models.
Abstract: Sequential recommendation is a core task in many real-world businesses. On the one hand, conventional sequential recommenders learn collaborative signals and temporal patterns solely from training interactions and generalize poorly to new datasets. On the other hand, LLM-based recommenders have recently been proposed to better leverage textual metadata and user reviews; however, they often incur high inference costs and may inherit limitations of language models, including limited multilingual generalization, social bias, and a tendency to memorize data rather than to infer. To this end, we present SR-PFN, a sequential recommender that performs single-pass next-item prediction via in-context inference after being pretrained on synthetic data --- to our knowledge, the first application of Prior-data Fitted Networks (PFNs) to sequential recommendation. Our approach introduces a synthetic prior model tailored to sequential recommendation. After being pretrained on synthetic data sampled from this prior, which reflects realistic sequential dynamics, SR-PFN learns to approximate the posterior predictive distribution (PPD) for next-item prediction at test time, enabling single-pass inference without parameter updates. Across sequential recommendation benchmarks, SR-PFN outperforms seven competitive baselines while offering substantially lower inference costs than LLM-based models.
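The quantity SR-PFN is trained to approximate --- a posterior predictive distribution over the next item, computed from the context sequence alone --- can be illustrated with a toy conjugate model. The sketch below is purely illustrative and is not the paper's prior: it places a hypothetical Dirichlet prior on each item's transition row, so the next-item PPD follows in closed form from transition counts observed in the context, with no parameter updates at inference time.

```python
import numpy as np

def next_item_ppd(sequence, n_items, alpha=1.0):
    """Toy next-item posterior predictive under a Dirichlet prior on
    per-item transition rows (an illustrative stand-in for the PPD a
    PFN would be trained to approximate in a single forward pass).

    sequence : list of item ids observed in the user's context
    n_items  : size of the item catalog
    alpha    : symmetric Dirichlet concentration (assumed hyperparameter)
    """
    # Count observed item-to-item transitions in the context sequence.
    counts = np.zeros((n_items, n_items))
    for prev, nxt in zip(sequence[:-1], sequence[1:]):
        counts[prev, nxt] += 1
    # Conjugate update: Dirichlet(alpha) prior + transition counts from
    # the last observed item's row, normalized into a distribution.
    row = counts[sequence[-1]] + alpha
    return row / row.sum()

if __name__ == "__main__":
    # Context: the user alternates between items 0 and 1, never visits 2.
    seq = [0, 1, 0, 1, 0]
    ppd = next_item_ppd(seq, n_items=3, alpha=0.5)
    print(ppd)  # most mass on item 1
```

A PFN replaces this closed-form (and, for realistic priors, intractable) computation with a transformer that is pretrained on sequences sampled from the prior and then amortizes the PPD in one forward pass over the context.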
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 23293