Effective multi-view representation learning for single-view attributed graph clustering

Heng Liu, Weizhi Zhao, Zhou Bao, Mingquan Ye, Caifeng Shan

Published: 01 Jul 2025, Last Modified: 09 Nov 2025. Knowledge-Based Systems. License: CC BY-SA 4.0
Abstract: Recent advances in graph convolutional networks (GCNs) have increasingly used graph attention mechanisms to generate embedded feature representations for graph clustering. However, in both multi-view and single-view clustering, the sensitivity of graph attention mechanisms to structural noise means the resulting embedded feature representations often lack robustness. Furthermore, with single-view data, existing multi-view clustering methods lose their collaborative potential when relying on a single-view graph structure, making it challenging to generate effective multi-view representations. To address these issues, this study proposes a novel single-view attributed graph clustering model, S2M, which learns effective multi-view feature embeddings. We propose a flexible and controllable data-enhancement method for single-view data that constructs a multi-view graph while reducing the impact of structural noise. We then introduce a novel approach that leverages node predictions to guide multi-view feature fusion, enabling the learning of clustering-oriented representations and deriving a target distribution from high-confidence nodes. In addition, we improve clustering robustness through cross-coding, leveraging the consistency and collaboration of embeddings across multiple view branches. Extensive experiments on benchmark datasets confirm that the proposed method delivers superior clustering performance, surpassing state-of-the-art approaches. The code is available at https://github.com/hengliusky/S2M_Clustering.
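The abstract does not spell out how the target distribution is derived from high-confidence nodes. A minimal sketch of one common formulation, in the spirit of DEC-style self-training (the squaring/normalization rule, the confidence threshold `tau`, and the function names are assumptions, not the paper's exact method):

```python
import numpy as np

def target_distribution(q: np.ndarray) -> np.ndarray:
    """Sharpen soft cluster assignments q (n_nodes x n_clusters)
    into a target distribution p, as in DEC-style self-training.
    NOTE: this is an assumed, illustrative formulation."""
    # Square each assignment and normalize by per-cluster frequency,
    # which emphasizes high-confidence assignments.
    weight = q ** 2 / q.sum(axis=0)
    # Renormalize each row so p is a valid distribution per node.
    return weight / weight.sum(axis=1, keepdims=True)

def high_confidence_mask(q: np.ndarray, tau: float = 0.9) -> np.ndarray:
    """Select nodes whose top assignment probability exceeds tau;
    the threshold tau is a hypothetical choice for illustration."""
    return q.max(axis=1) >= tau

# Example: a confident node (row 0) and an ambiguous node (row 1).
q = np.array([[0.95, 0.05],
              [0.60, 0.40]])
p = target_distribution(q)
mask = high_confidence_mask(q, tau=0.9)
```

Only rows selected by the mask would contribute to the self-training objective, so the target distribution is driven by nodes the model is already confident about.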