Decoding in Latent Spaces for Efficient Inference in LLM-based Recommendation

ACL ARR 2025 May Submission 5396 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Fine-tuning large language models (LLMs) for recommendation in a generative manner has delivered promising results, but it incurs significant inference overhead due to autoregressive decoding in the language space. This work explores bypassing language-space decoding by directly matching candidate items with the LLM's internal thought representations in the latent space, eliminating the time-consuming autoregressive process and reducing computational costs. To this end, we introduce \textit{Light Latent-space Decoding} ($L2D$), an effective and efficient latent-space decoding method. $L2D$ represents user-preferred items using the hidden states of test sequences, which reflect the LLM's internal thought, and obtains candidate item representations from the hidden states of training sequences labeled with the corresponding candidate items. It then matches the two types of representations to decode items directly in the latent space. In this way, it enables efficient decoding without altering the LLM's generative tuning paradigm, thereby preserving performance. Extensive empirical results demonstrate that $L2D$ is more than 10x faster than language-space decoding while maintaining or enhancing recommendation performance.
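To make the latent-space matching idea concrete, below is a minimal sketch under stated assumptions: a HuggingFace causal LM as the backbone, last-token hidden states as sequence representations, and mean pooling over labeled training sequences to build candidate item representations. The backbone name, the `training_data` structure, and the helper functions are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch of latent-space decoding for recommendation:
# score candidate items by similarity between hidden states instead of
# autoregressively generating item text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder backbone; the paper's LLM may differ
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()


@torch.no_grad()
def last_token_hidden(text: str) -> torch.Tensor:
    """Final-layer hidden state of the last token for one sequence."""
    inputs = tokenizer(text, return_tensors="pt")
    out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[-1][0, -1]  # shape: (hidden_dim,)


def build_item_bank(training_data: dict[str, list[str]]):
    """Candidate item representations: average hidden states of training
    sequences labeled with each item (training_data is a hypothetical
    mapping item_id -> list of prompt strings)."""
    item_ids, vecs = [], []
    for item_id, prompts in training_data.items():
        reps = torch.stack([last_token_hidden(p) for p in prompts])
        item_ids.append(item_id)
        vecs.append(reps.mean(dim=0))
    return item_ids, torch.stack(vecs)


def decode_topk(test_prompt: str, item_ids, item_bank: torch.Tensor, k: int = 5):
    """Latent-space decoding: rank candidates by cosine similarity with the
    test sequence's hidden state; no autoregressive generation involved."""
    query = last_token_hidden(test_prompt)
    scores = torch.nn.functional.cosine_similarity(
        query.unsqueeze(0), item_bank, dim=-1
    )
    top = scores.topk(min(k, len(item_ids)))
    return [(item_ids[i], top.values[j].item())
            for j, i in enumerate(top.indices.tolist())]
```

Because decoding reduces to a single forward pass plus a similarity lookup over the precomputed item bank, the per-request cost is independent of the length of the item titles the model would otherwise have to generate, which is where the reported speedup over language-space decoding comes from.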
Paper Type: Long
Research Area: Information Retrieval and Text Mining
Research Area Keywords: Generation, Efficient/Low-Resource Methods for NLP, Information Retrieval and Text Mining
Contribution Types: Approaches low compute settings-efficiency
Languages Studied: English
Submission Number: 5396