LLM-STAR: Sequence-Teacher-Anchored LLM Recommender with Adaptive Regularization

18 Sept 2025 (modified: 14 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: LLM; recommendation system
Abstract: Large Language Models (LLMs) have been increasingly adopted for recommendation tasks, yet their ability to leverage the sequential nature of user–item interaction data remains underexplored. In this work, we conduct a comprehensive investigation into how LLMs process item sequences and uncover a critical limitation: LLMs often exhibit set-like prediction behavior, attending to the unordered collection of items rather than their order. Through experiments in which item textual content is removed and only item IDs are provided, we demonstrate that LLMs fail to fully exploit sequential dependencies, leading to degraded sequential recommendation performance. Motivated by an entropy argument, we further offer a representation-space perspective: the region occupied by embeddings of ordered item sequences is a compact subspace of the region formed by unordered item collections, since ordering information reduces entropy and enforces tighter structure. Building on this insight, we introduce a contrastive learning framework that explicitly guides LLMs to capture sequential patterns by encouraging compact representations of ordered item sequences. Extensive experiments across multiple benchmarks show that our method achieves state-of-the-art performance, surpassing prior LLM-based recommendation approaches.
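The abstract does not specify the form of the contrastive objective; the sketch below is one plausible instantiation, assuming an InfoNCE-style loss in which the embedding of an ordered item sequence is pulled toward a second view of itself and pushed away from embeddings of shuffled permutations of the same items. The function name order_contrastive_loss, the two-view positive, and the temperature hyperparameter are illustrative assumptions, not the paper's stated method.

import torch
import torch.nn.functional as F

def order_contrastive_loss(anchor, positive, shuffled, temperature=0.1):
    # anchor:   (B, D) embedding of each ordered item sequence
    # positive: (B, D) a second view of the same ordered sequence
    #           (e.g., re-encoded under dropout noise)
    # shuffled: (B, K, D) embeddings of K random shuffles of the same
    #           items: hard negatives sharing content but not order
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    shuffled = F.normalize(shuffled, dim=-1)

    # Cosine similarities scaled by temperature.
    pos_logit = (anchor * positive).sum(-1, keepdim=True) / temperature      # (B, 1)
    neg_logits = torch.einsum("bd,bkd->bk", anchor, shuffled) / temperature  # (B, K)

    # Cross-entropy with the positive at index 0: maximizes similarity to
    # the ordered view while pushing away shuffled collections, tightening
    # the region occupied by ordered-sequence embeddings.
    logits = torch.cat([pos_logit, neg_logits], dim=1)                       # (B, 1+K)
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)

For example, with a batch of 8 sequences, 4 shuffles per sequence, and 64-dimensional embeddings, calling order_contrastive_loss(torch.randn(8, 64), torch.randn(8, 64), torch.randn(8, 4, 64)) returns a scalar loss. Using shuffles of the same items as negatives is what distinguishes this from a generic in-batch contrastive loss: the negatives differ from the anchor only in order, which directly targets the set-like behavior described above.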
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 10331