Self-supervised disentangled representation learning with distribution alignment for multi-view clustering

Published: 01 Jan 2025, Last Modified: 10 Apr 2025 · Digit. Signal Process. 2025 · CC BY-SA 4.0
Abstract: Multi-view clustering has recently attracted much attention for its ability to fully exploit complementary information across multiple views. In general, views drawn from different data sources may differ in feature distribution, yet most existing methods fuse the views directly, ignoring differences in the contribution and importance of each view. This leads to mutual interference between the common representation and view-specific information. To address these issues, in this paper we propose a novel method, called self-supervised disentangled representation learning with distribution alignment (S2DRL-DA), for multi-view clustering. First, the proposed method uses adversarial learning and an attention mechanism to align latent feature distributions and focus on the most informative views. Then, disentangled representation learning separates the common and view-specific representations learned from each view, reducing redundancy in multi-view data. Finally, we adopt KL divergence to assess the clustering quality of each view and to guide model optimization. Extensive experiments on different datasets demonstrate that our S2DRL-DA approach achieves competitive performance in multi-view clustering applications. The source code for this work can be found at https://github.com/szq0816/S2DRL-DA.
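The abstract mentions using KL divergence as a self-supervised signal to guide optimization. A common formulation of this idea (e.g., as in DEC-style deep clustering) computes soft cluster assignments with a Student's t-kernel, sharpens them into a target distribution, and minimizes the KL divergence between the two. The sketch below illustrates that generic pattern only; the exact objective used by S2DRL-DA is an assumption here and should be checked against the released code.

```python
import numpy as np

def soft_assignment(z, centroids, alpha=1.0):
    """Soft cluster assignment q_ij via a Student's t-kernel (DEC-style).

    z: (n, d) embedded samples; centroids: (k, d) cluster centers.
    Returns an (n, k) matrix whose rows sum to 1.
    """
    dist2 = ((z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    q = (1.0 + dist2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    """Sharpened target p_ij that emphasizes high-confidence assignments."""
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)

def kl_clustering_loss(p, q, eps=1e-12):
    """KL(P || Q): the self-supervised clustering objective to minimize."""
    return float((p * np.log((p + eps) / (q + eps))).sum())

# Toy usage on random embeddings (illustrative only).
rng = np.random.default_rng(0)
z = rng.normal(size=(10, 4))          # 10 samples in a 4-d latent space
centroids = rng.normal(size=(3, 4))   # 3 cluster centers
q = soft_assignment(z, centroids)
p = target_distribution(q)
loss = kl_clustering_loss(p, q)       # non-negative scalar
```

In a multi-view setting such as the one described, a loss of this form would typically be evaluated per view and used to weight or guide each view's contribution during training.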