Decoding in Latent Spaces for Efficient Inference in LLM-based Recommendation

ACL ARR 2025 February Submission7286 Authors

16 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Fine-tuning large language models (LLMs) for recommendation in a generative manner has delivered promising results, but it incurs significant inference overhead due to autoregressive decoding in the language space. This work explores bypassing language-space decoding by directly decoding items in the latent space, eliminating the time-intensive autoregressive process to reduce costs. Moreover, given that the hidden states of input sequences in the latent space already encapsulate user preference information, latent-space decoding also has the potential to preserve performance. To this end, we introduce Light Latent-space Decoding (L2D), an effective and efficient latent-space decoding method. L2D uses the hidden states of test sequences to represent user-preferred items, and it derives candidate item representations from the hidden states of training sequences labeled with the corresponding candidate items. It then matches the two types of representations to decode items entirely in the latent space. Empirical results demonstrate that L2D is more than 10x faster than language-space decoding while maintaining or enhancing performance.
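To make the matching step concrete, the following is a minimal PyTorch sketch of one plausible reading of the abstract: candidate item representations are derived by pooling the hidden states of training sequences labeled with each item, and a test sequence's hidden state is matched against them by similarity. The function names, mean pooling, and dot-product scoring are illustrative assumptions, not the authors' exact formulation.

```python
import torch

def build_item_representations(train_hidden_states, train_labels, num_items, dim):
    """Derive a latent representation for each candidate item by averaging the
    hidden states of training sequences labeled with that item
    (mean pooling is an assumption made for illustration)."""
    sums = torch.zeros(num_items, dim)
    counts = torch.zeros(num_items)
    for h, item in zip(train_hidden_states, train_labels):
        sums[item] += h
        counts[item] += 1
    counts = counts.clamp(min=1)           # avoid division by zero for items with no training sequence
    return sums / counts.unsqueeze(-1)     # shape: (num_items, dim)

def latent_space_decode(test_hidden_state, item_reps, k=10):
    """Score candidate items by matching the test sequence's hidden state against
    the item representations (dot-product similarity assumed) and return the top-k,
    with no autoregressive generation involved."""
    scores = item_reps @ test_hidden_state          # shape: (num_items,)
    return torch.topk(scores, k).indices            # ids of the top-k candidate items
```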
Paper Type: Long
Research Area: Information Retrieval and Text Mining
Research Area Keywords: Information Retrieval and Text Mining
Contribution Types: Approaches to low-compute settings, efficiency
Languages Studied: English
Submission Number: 7286