Abstract: The rise of online multi-modal sharing platforms such as TikTok and YouTube has enabled personalized recommender systems to incorporate multiple modalities (e.g., visual, textual, and acoustic) into user representations. However, data sparsity remains a key challenge in these systems. To mitigate it, recent research has introduced self-supervised learning techniques into recommender systems, yet these methods often rely on simplistic random augmentation or intuitive cross-view information, which can introduce irrelevant noise and fail to properly align the multi-modal context with user-item interaction modeling. To fill this research gap, we propose DiffMM, a novel multi-modal graph diffusion model for recommendation. Our framework integrates a modality-aware graph diffusion model with a cross-modal contrastive learning paradigm to improve modality-aware user representation learning, enabling better alignment between multi-modal feature information and collaborative relation modeling. Leveraging the generative capability of diffusion models, DiffMM automatically constructs a modality-aware user-item graph, which facilitates the incorporation of useful multi-modal knowledge into user-item interaction modeling. Extensive experiments on three public datasets consistently demonstrate the superiority of DiffMM over various competitive baselines.
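To make the cross-modal contrastive component of the abstract concrete, the following is a minimal illustrative sketch (not the authors' released code) of an InfoNCE-style objective that aligns a modality-aware user view with a collaborative-filtering user view. The function name `cross_modal_contrastive_loss`, the embedding dimensions, and the temperature value are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(z_modal, z_collab, temperature=0.2):
    """InfoNCE-style alignment between modality-aware and collaborative
    user embeddings. The same user in both views forms a positive pair;
    all other users in the batch act as negatives."""
    z_modal = F.normalize(z_modal, dim=-1)    # (batch, dim)
    z_collab = F.normalize(z_collab, dim=-1)  # (batch, dim)
    logits = z_modal @ z_collab.t() / temperature  # pairwise similarities
    labels = torch.arange(z_modal.size(0), device=z_modal.device)
    return F.cross_entropy(logits, labels)

# Usage with random embeddings for 4 users (illustrative values only)
z_m = torch.randn(4, 64)   # e.g., visual-modality view of users
z_c = torch.randn(4, 64)   # collaborative-filtering view of users
loss = cross_modal_contrastive_loss(z_m, z_c)
```

In practice, such a loss would be computed per modality over user (and item) embeddings produced from the diffusion-generated modality-aware graphs and combined with the main recommendation objective; the exact formulation in DiffMM may differ.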
Primary Subject Area: [Engagement] Multimedia Search and Recommendation
Relevance To Conference: With the rapid development of multimedia streaming platforms such as TikTok and YouTube, incorporating multi-modal information into recommender systems has emerged as a promising way to address data sparsity. In this work, we explore how diffusion models can integrate items' multi-modal information into the modeling of user preferences. Compared to existing multi-modal recommendation approaches, our method offers a more intuitive way of incorporating multi-modal information, thereby providing a fresh perspective on multi-modal recommendation.
Supplementary Material: zip
Submission Number: 4318