Abstract: While Large Language Models (LLMs) leverage rich external knowledge to generate user representations in recommendation systems (RS), their computational complexity poses challenges: a high initial cost when serving an enormous number of users, and incremental costs arising from dynamic user preferences. To address these issues, we propose SEAL, an efficient SEquence-based Approximation framework for LLM-enhanced recommendation. SEAL approximates the semantic user representations produced by an LLM with collaborative user representations from a lightweight sequence-based module. SEAL reduces the LLM's initial cost through adequate collaborative approximation of the semantic representations, and further minimizes incremental cost by generating lightweight approximate representations, eliminating direct LLM inference once training is complete. Experimental results on public datasets show that SEAL achieves a 60% reduction in LLM inference cost while maintaining recommendation performance. Furthermore, SEAL enables real-time modeling of user preferences without direct LLM involvement, making it cost-effective for recommendation systems.
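The abstract does not specify SEAL's architecture or training objective, but the core idea it describes, training a lightweight sequence model to approximate precomputed LLM user embeddings so inference never calls the LLM, can be sketched as below. All names (`SequenceApproximator`), the GRU encoder, dimensions, and the cosine loss are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn as nn

class SequenceApproximator(nn.Module):
    """Hypothetical stand-in for SEAL's lightweight sequence-based module:
    maps a user's item-interaction history to an embedding that
    approximates a cached semantic (LLM-derived) user representation."""

    def __init__(self, num_items: int, hidden_dim: int, llm_dim: int):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, hidden_dim, padding_idx=0)
        self.encoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, llm_dim)  # project into the LLM embedding space

    def forward(self, item_seqs: torch.Tensor) -> torch.Tensor:
        # item_seqs: (batch, seq_len) integer item IDs, 0 = padding
        x = self.item_emb(item_seqs)
        _, h = self.encoder(x)          # h: (1, batch, hidden_dim), final hidden state
        return self.proj(h.squeeze(0))  # (batch, llm_dim)

# One training step: regress the lightweight output onto LLM embeddings that
# were computed once up front, so updated user preferences can be re-encoded
# later without any direct LLM inference (the loss choice is an assumption).
model = SequenceApproximator(num_items=10_000, hidden_dim=64, llm_dim=768)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

item_seqs = torch.randint(1, 10_000, (32, 50))  # toy batch of user histories
llm_targets = torch.randn(32, 768)              # cached LLM user representations

pred = model(item_seqs)
loss = 1 - nn.functional.cosine_similarity(pred, llm_targets, dim=-1).mean()
loss.backward()
optimizer.step()
```

Under this reading, the one-time LLM cost is amortized across training, and incremental preference changes only require a forward pass of the small sequence encoder.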
External IDs: dblp:conf/icic/ChaiZLXZY25