Abstract: Recommender systems have attracted growing interest in the multi-modal setting, where user preferences are characterized by integrating behavioral data with the diverse modal information associated with items. However, existing methods face two significant challenges: (1) The inherent noise in multi-modal features can contaminate item representations, and conventional fusion methods may inadvertently propagate this noise to the interaction data during fusion. (2) Existing multi-modal recommendation methods typically rely on random data augmentation, which introduces noise manually and may not fully exploit the latent potential of multi-modal information. To bridge this gap, we propose Modality-Guided Collaborative Filtering (MGCF), a method that comprehensively integrates multi-modal features and collaborative signals. MGCF derives self-supervised signals from both the structural and semantic information of the features and uses them to adaptively select and mask critical interactions. To generate discriminative representations, we employ a masked auto-encoder to distill informative self-supervision signals while aggregating global information by reconstructing the masked subgraph structures. Extensive experiments on real-world datasets verify the effectiveness of MGCF and its superiority over various state-of-the-art multi-modal recommendation baselines.
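The abstract describes adaptively selecting and masking interactions using self-supervised signals from modal features before a masked auto-encoder reconstructs the masked subgraph. The following is a minimal, illustrative sketch of one plausible reading of that masking step, assuming a PyTorch setting; the function `modality_guided_edge_mask`, the inputs `item_modal_feat` and `mask_ratio`, and the particular scoring rule (combining a semantic and a structural similarity) are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F


def modality_guided_edge_mask(edge_index, user_emb, item_emb,
                              item_modal_feat, mask_ratio=0.3):
    """Score each user-item edge with modality-derived signals and mask the
    highest-scoring edges, returning the kept and masked edge sets.

    edge_index: (2, E) tensor of user/item indices for observed interactions.
    """
    u, i = edge_index

    # Semantic signal: agreement between an item's collaborative embedding
    # and its (projected) multi-modal feature.
    semantic = F.cosine_similarity(item_emb[i], item_modal_feat[i], dim=-1)

    # Structural signal: affinity of the user and item collaborative embeddings.
    structural = F.cosine_similarity(user_emb[u], item_emb[i], dim=-1)

    # Edges scored as most informative are masked and later reconstructed
    # by the masked auto-encoder (reconstruction itself not shown here).
    score = semantic + structural
    num_mask = int(mask_ratio * edge_index.size(1))
    masked_idx = torch.topk(score, num_mask).indices

    keep = torch.ones(edge_index.size(1), dtype=torch.bool)
    keep[masked_idx] = False
    return edge_index[:, keep], edge_index[:, ~keep]
```

A decoder would then be trained to reconstruct the masked edges from the remaining subgraph; the choice of scoring rule and mask ratio here is purely a placeholder for whatever MGCF actually learns.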