Fairness-Aware Mutual Information for Multimodal Recommendation

Published: 01 Jan 2024 · Last Modified: 12 May 2025 · BESC 2024 · CC BY-SA 4.0
Abstract: In recent years, multimodal recommendation has attracted considerable attention due to the growing availability of multimedia information. Leveraging multimodal information helps alleviate the data sparsity issue in conventional recommender systems, thereby improving recommendation accuracy. However, integrating multimodal information introduces the additional challenge of managing sensitive information: these data may implicitly or explicitly convey sensitive attributes of users, potentially exacerbating fairness issues. While existing methods have addressed fairness in recommender systems, most neglect the potentially sensitive information carried within modalities. In this light, we propose a modality-guided representation learning framework that uses fairness-aware mutual information to disentangle sensitive and non-sensitive information in modal embeddings. Specifically, we adopt a dual mutual information objective to decompose modal embeddings, capturing sensitive information in one component while encouraging the other to retain as much non-sensitive information as possible. These disentangled embeddings are then used to enhance user representations. Extensive experiments on two public datasets demonstrate the effectiveness of our approach.
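The abstract describes a dual mutual information objective that splits a modal embedding into a sensitive and a non-sensitive component. The paper's exact estimators and loss weights are not given here, so the following is only a minimal NumPy sketch under common assumptions: an InfoNCE-style lower bound encourages the sensitive component `z_sens` to capture the sensitive attribute embedding `s_emb`, while a squared-cosine alignment penalty discourages the non-sensitive component `z_nonsens` from leaking it. The function names and the weighting factor `lam` are illustrative, not from the paper.

```python
import numpy as np

def log_softmax(x, axis=-1):
    """Numerically stable log-softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def info_nce(z, c, temp=0.1):
    """InfoNCE-style contrastive loss: lower loss means the paired
    rows of z and c share more information (a common MI lower bound)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    c = c / np.linalg.norm(c, axis=1, keepdims=True)
    scores = z @ c.T / temp          # similarity of every pair
    # diagonal entries are the positive (matched) pairs
    return -np.mean(np.diag(log_softmax(scores, axis=1)))

def dual_mi_loss(z_sens, z_nonsens, s_emb, lam=1.0):
    """Illustrative dual objective (names and form are assumptions):
    - 'capture': pull z_sens toward the sensitive-attribute embedding
      (maximize a MI lower bound by minimizing InfoNCE loss);
    - 'leak': push z_nonsens away from it (penalize squared cosine
      alignment as a cheap proxy for minimizing MI)."""
    capture = info_nce(z_sens, s_emb)
    zn = z_nonsens / np.linalg.norm(z_nonsens, axis=1, keepdims=True)
    se = s_emb / np.linalg.norm(s_emb, axis=1, keepdims=True)
    leak = np.mean(np.sum(zn * se, axis=1) ** 2)
    return capture + lam * leak
```

In this sketch, a batch whose sensitive component is well aligned with the attribute embeddings yields a low `capture` term, while any residual alignment of the non-sensitive component raises the `leak` penalty, so gradient descent on `dual_mi_loss` would drive the two components apart.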