UniRec: Unified Multimodal Encoding for LLM-Based Recommendations

18 Feb 2026 (modified: 30 Apr 2026) · Decision pending for TMLR · CC BY 4.0
Abstract: Large language models (LLMs) have recently shown promise for multimodal recommendation, particularly with text and image inputs. Yet real-world recommendation signals extend far beyond these modalities. To reflect this, we formalize recommendation features into four modalities: text, images, categorical features, and numerical attributes, and emphasize the unique challenges this heterogeneity poses for LLMs in understanding multimodal information. In particular, these challenges arise not only across modalities but also within them, as attributes (e.g., price, rating, time) may all be numeric yet carry distinct meanings. Beyond this intra-modality ambiguity, another major challenge is the nested structure of recommendation signals, where user histories are sequences of items, each carrying multiple attributes. To address these challenges, we propose UniRec, a unified multimodal encoder for LLM-based recommendation. UniRec first employs modality-specific encoders to produce consistent embeddings across heterogeneous signals. It then applies a triplet representation (comprising attribute name, type, and value) to separate schema from raw inputs and preserve semantic distinctions. Finally, a hierarchical Q-Former models the nested structure of user interactions while maintaining their layered organization. On multiple real-world benchmarks, UniRec outperforms state-of-the-art multimodal and LLM-based recommenders by up to 15%, while extensive ablation studies further validate the contributions of each component.
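To make the abstract's pipeline concrete, the following is a minimal PyTorch sketch of the three ideas it names: pre-encoded values from modality-specific encoders, a (name, type, value) triplet representation per attribute, and a two-level ("hierarchical") Q-Former that first pools an item's attributes and then pools the item sequence into a fixed set of tokens. All module names, dimensions, and fusion details are illustrative assumptions, not the paper's or repository's implementation.

```python
# Hypothetical sketch of the UniRec ideas described in the abstract (not the authors' code).
import torch
import torch.nn as nn


class QFormerBlock(nn.Module):
    """Learned queries cross-attend to a set of input tokens (simplified Q-Former layer)."""

    def __init__(self, dim: int, num_queries: int, num_heads: int = 4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim) -> (batch, num_queries, dim)
        q = self.queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
        out, _ = self.attn(q, tokens, tokens)
        out = self.norm1(out + q)
        return self.norm2(out + self.ffn(out))


class UniRecSketch(nn.Module):
    def __init__(self, dim: int = 64, num_types: int = 4, vocab: int = 1000):
        super().__init__()
        # Schema side of the triplet: attribute name and attribute type embeddings.
        # (0=text, 1=image, 2=categorical, 3=numeric in this toy setup.)
        self.name_emb = nn.Embedding(vocab, dim)
        self.type_emb = nn.Embedding(num_types, dim)
        self.triplet_proj = nn.Linear(3 * dim, dim)
        # Hierarchical Q-Former: attributes -> one item token, then items -> history tokens.
        self.item_qformer = QFormerBlock(dim, num_queries=1)
        self.seq_qformer = QFormerBlock(dim, num_queries=8)

    def encode_attribute(self, name_ids, type_ids, value_emb):
        # Triplet = (attribute name, attribute type, encoded value), concatenated then projected.
        trip = torch.cat([self.name_emb(name_ids), self.type_emb(type_ids), value_emb], dim=-1)
        return self.triplet_proj(trip)

    def forward(self, names, types, values):
        # names, types: (batch, seq_len, n_attr) ids; values: (batch, seq_len, n_attr, dim)
        # embeddings already produced by modality-specific encoders (text/image/categorical/numeric).
        b, s, a, d = values.shape
        attr = self.encode_attribute(names, types, values)                # (b, s, a, d)
        items = self.item_qformer(attr.view(b * s, a, d)).view(b, s, d)  # one query per item
        return self.seq_qformer(items)                                   # (b, 8, d) tokens for the LLM


# Toy usage: 2 users, 3 history items, 4 attributes each, values pre-encoded to dim=64.
model = UniRecSketch(dim=64)
names = torch.randint(0, 1000, (2, 3, 4))
types = torch.randint(0, 4, (2, 3, 4))
values = torch.randn(2, 3, 4, 64)
print(model(names, types, values).shape)  # torch.Size([2, 8, 64])
```

The two-stage pooling mirrors the nested structure the abstract emphasizes: attributes are first summarized within each item, and only then are items summarized across the user's history, so the layered organization of the signal is preserved before handing a fixed number of tokens to the LLM.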
Submission Type: Regular submission (no more than 12 pages of main content)
Code: https://github.com/ulab-uiuc/UniRec
Assigned Action Editor: ~Jingcai_Guo1
Submission Number: 7566