Relationship Alignment for View-aware Multi-view Clustering

Published: 26 Jan 2026, Last Modified: 02 Mar 2026 · ICLR 2026 Poster · CC BY 4.0
Keywords: Relationship Alignment; View-Aware Contrastive Learning; Multi-View Clustering
Abstract: Multi-view clustering improves clustering performance by integrating complementary information from multiple views. However, existing methods often suffer from two limitations: i) they neglect to preserve sample neighborhood structures, which weakens the consistency of inter-sample relationships across views; and ii) they cannot adaptively exploit inter-view similarity, resulting in representation conflicts and semantic degradation. To address these issues, we propose a novel framework named Relationship Alignment for View-aware Multi-view Clustering (RAV). Our approach first constructs view-specific sample relationship matrices from deep features and aligns them with a global relationship matrix, enhancing cross-view neighborhood consistency and enabling accurate measurement of inter-view similarity. Simultaneously, we introduce a view-aware adaptive weighting mechanism for label contrastive learning that dynamically adjusts the contrastive intensity between view pairs based on deep-feature similarity: higher-similarity view pairs receive stronger label alignment, while lower-similarity view pairs receive reduced weight to avoid forcing agreement. This strategy promotes cluster-level semantic consistency while preserving natural inter-view relationships. Extensive experiments demonstrate that our method consistently outperforms state-of-the-art approaches on multiple benchmark datasets. Project website: https://github.com/chenzhe207/RAV.
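The abstract describes two mechanisms: aligning view-specific sample relationship matrices with a global one, and weighting label contrastive terms by inter-view similarity. The sketch below is a minimal NumPy illustration of that general idea, not the paper's actual implementation; the choice of cosine similarity, the mean as the global relationship matrix, and the softmax-over-pairs weighting are all illustrative assumptions.

```python
import numpy as np

def cosine_sim(X):
    # Row-normalize deep features; the Gram matrix then holds pairwise
    # cosine similarities between samples (a view's relationship matrix).
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    return Xn @ Xn.T

def relationship_alignment_loss(views):
    # Align each view-specific relationship matrix with a global one.
    # Here the global matrix is the mean across views (an assumption;
    # the paper may define it differently).
    S = [cosine_sim(Z) for Z in views]      # view-specific relationship matrices
    G = np.mean(S, axis=0)                  # global relationship matrix
    return float(np.mean([np.mean((Sv - G) ** 2) for Sv in S]))

def view_pair_weights(views):
    # View-aware weights: a view pair with higher deep-feature similarity
    # gets a larger contrastive weight (softmax over pairs is an
    # illustrative normalization choice).
    V = len(views)
    pairs, sims = [], []
    for i in range(V):
        for j in range(i + 1, V):
            Si, Sj = cosine_sim(views[i]), cosine_sim(views[j])
            sims.append(np.mean(Si * Sj))   # proxy for inter-view similarity
            pairs.append((i, j))
    w = np.exp(sims) / np.sum(np.exp(sims))
    return dict(zip(pairs, w))

# Toy usage: three views of 8 samples with 4-dimensional deep features.
rng = np.random.default_rng(0)
views = [rng.normal(size=(8, 4)) for _ in range(3)]
loss = relationship_alignment_loss(views)
weights = view_pair_weights(views)
```

In a full pipeline, `relationship_alignment_loss` would be one term of the training objective, and `weights` would scale each view pair's label contrastive loss so that dissimilar pairs are not forced into agreement.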
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 19765