Abstract: Prioritizing long-term engagement over immediate benefits has garnered increasing attention in recent years. However, current research on long-term recommendation faces substantial challenges in both model evaluation and model design: 1) Traditional evaluation approaches are limited by the sparsity and bias of offline data and fail to capture users' psychological influences. 2) Existing recommenders based on Reinforcement Learning (RL) are entirely data-driven and thus constrained by sparse, long-tail-distributed offline data. Fortunately, recent advancements in Large Foundation Models (LFMs), characterized by remarkable simulation and planning capabilities, offer significant opportunities for long-term recommendation. Despite this potential, the substantial divergence between LFM pre-training scenarios and recommendation scenarios means that employing LFMs for long-term recommendation still faces certain challenges. To this end, this research focuses on adapting the remarkable capabilities of LFMs to long-term recommendation, devising reliable evaluation schemes and efficient recommenders.