Abstract: Recommender systems use data analysis and predictive algorithms to suggest relevant items to users, enhancing user experience and engagement across digital platforms, particularly in e-commerce. To obtain satisfactory representations of items and user preferences, many existing studies (multi-modal recommendation approaches) integrate diverse data (e.g., text and images) into the recommendation process to enrich item embeddings. However, these methods are limited by two problems: (1) insufficient utilization of multi-modal information; and (2) a lack of deeper insight into user-item interactions after multi-modal fusion, together with an inability to uncover the intricate or hidden knowledge embedded in users' modality preferences. To address these problems, we propose HUMP, which Highlights Users' Modality Preference for multi-modal recommender systems, featuring two key components: (1) a modality-preference-guided data fusion module that integrates users' modality preferences into user and item representations, making them better suited to recommendation scenarios; and (2) a global representation enhancement module that learns deeper relationships among the fused information and enhances the representations through a user-item layered heterogeneous graph. Experiments on real-world datasets demonstrate the superiority of our model over state-of-the-art baselines.
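To make the idea of preference-guided fusion concrete, here is a minimal sketch of weighting an item's per-modality embeddings by a user's modality preference before combining them. All names (`fuse_item_modalities`, the softmax weighting) are illustrative assumptions; the actual HUMP module is learned end-to-end and is not specified in the abstract.

```python
import numpy as np

def fuse_item_modalities(text_emb: np.ndarray,
                         image_emb: np.ndarray,
                         user_pref: np.ndarray) -> np.ndarray:
    """Fuse an item's text and image embeddings, weighted by a user's
    modality-preference scores (hypothetical sketch, not the paper's method).

    user_pref: raw preference scores of shape (2,) for [text, image].
    """
    # Softmax turns raw preference scores into fusion weights that sum to 1.
    w = np.exp(user_pref - user_pref.max())
    w = w / w.sum()
    # Personalized item representation: a convex combination of modalities.
    return w[0] * text_emb + w[1] * image_emb

# A user who prefers images pulls the fused embedding toward image_emb.
text_emb = np.array([1.0, 0.0])
image_emb = np.array([0.0, 1.0])
fused = fuse_item_modalities(text_emb, image_emb, np.array([0.0, 2.0]))
```

With equal preference scores the fusion reduces to a plain average, so this sketch only adds personalization when users' modality preferences actually differ.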
External IDs: dblp:journals/dase/ZhuZCZZ25