Abstract: Existing fair multi-view clustering methods impose a constraint that requires the distribution of sensitive attributes to be uniform within each cluster. However, this constraint can lead to the misallocation of samples with sensitive attributes. To address this problem, we propose a novel Deep Fair Multi-View Clustering (DFMVC) method that learns a consistent and discriminative representation guided by a fairness constraint constructed from the cluster distribution. Specifically, we impose contrastive constraints on the semantic features of different views to obtain consistent and discriminative representations for each view. Additionally, we align the distribution of sensitive attributes with the target cluster distribution to achieve optimal fairness in the clustering results. Experimental results on four datasets with sensitive attributes demonstrate that our method improves both the fairness and the performance of clustering compared with state-of-the-art multi-view clustering methods.
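To make the fairness constraint concrete, below is a minimal sketch of one way such an alignment term could be computed, assuming soft cluster assignments and integer-coded sensitive groups. The function name `fairness_alignment_loss` and the KL-divergence formulation are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def fairness_alignment_loss(assignments, sensitive, n_groups, eps=1e-8):
    """Illustrative sketch: penalize the KL divergence between the cluster
    distribution within each sensitive group and the overall (target)
    cluster distribution.
    assignments: soft cluster-assignment matrix, shape (N, K)
    sensitive:   integer sensitive-group labels, shape (N,)
    """
    target = assignments.mean(dim=0)  # overall cluster distribution, shape (K,)
    loss = assignments.new_zeros(())
    for g in range(n_groups):
        mask = (sensitive == g)
        if mask.sum() == 0:
            continue
        group_dist = assignments[mask].mean(dim=0)  # cluster distribution within group g
        # F.kl_div expects log-probabilities as input and probabilities as target
        loss = loss + F.kl_div((group_dist + eps).log(), target, reduction="sum")
    return loss / n_groups
```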
Primary Subject Area: [Content] Multimodal Fusion
Relevance To Conference: In this paper, we propose DFMVC, a method that learns more discriminative and fairer representations for multi-view clustering (MVC). Specifically, we use autoencoders to initialize the network parameters, fuse the representations extracted from each view, and learn fairer representations through a self-training fair learning module. To further exploit the diversity of multi-view data, we apply contrastive learning to obtain consistent features and semantic labels. We conduct extensive experiments and ablation studies on four datasets to validate the superiority of our model and the effectiveness of each component of our method.
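As a companion to the description above, here is a minimal sketch of the cross-view contrastive term, assuming two views and an NT-Xent-style objective in which the same sample's features from different views are positives and all other pairs are negatives. The function name, temperature value, and loss form are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def cross_view_contrastive_loss(z1, z2, temperature=0.5):
    """Illustrative sketch of a two-view contrastive consistency term.
    z1, z2: feature matrices of shape (N, D) from two views of the same N samples.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                      # (N, N) cosine-similarity logits
    labels = torch.arange(z1.size(0), device=z1.device)     # diagonal entries are positives
    # symmetrize over the two views
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
```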
Submission Number: 2553