Structure-preserving contrastive graph clustering with dual-channel label alignment

Published in: Neural Networks, 2026. Last modified: 29 Oct 2025. License: CC BY-SA 4.0.
Abstract: The past few years have witnessed the rapid development of contrastive graph clustering (CGC). Although a series of achievements have been made, three challenging problems remain in the literature. First, previous works typically generate different views via pre-defined graph augmentation strategies, but inappropriate augmentations may alter the latent semantics of the original data. Second, they often overlook discriminative unsupervised information when constructing positive and negative sample pairs, resulting in compromised clustering performance. Third, some of them rely only on static neighborhood connections for contrastive learning, neglecting the dynamic structural relationships that robust neighbor-graph learning could reveal. To cope with these issues, this paper proposes a Structure-preserving Contrastive Graph Clustering approach with Dual-channel Label Alignment (SCGC-DLA). To account for both high- and low-frequency information, low-pass and hybrid graph filters are designed to generate two reliably augmented views that supply rich and complementary information to each other. Further, we construct a structure-preserving matrix derived from edge betweenness centrality (EBC), which efficiently captures the topological relationships among different embedding representations. Guided by non-dominated sorting theory, the dual-channel clustering distributions are used to construct high-confidence pseudo labels, which are then aligned with latent semantic labels. Finally, the overall network is trained under a self-supervised learning scheme to obtain the final clustering. Extensive results on five benchmark datasets demonstrate the robustness and effectiveness of our approach compared with several state-of-the-art methods.
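As a rough illustration of two components named in the abstract, the sketch below shows (i) a low-pass graph filter applied to node features, as commonly used for augmentation-free view generation, and (ii) a structure matrix built from edge betweenness centrality. The filter form H = I - 0.5·L_sym, the filter order t, and the way EBC scores are arranged into a matrix are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch only: a plausible low-pass filter view and an EBC-based
# structure matrix; the filter form, order t, and matrix layout are assumptions.
import numpy as np
import scipy.sparse as sp
import networkx as nx

def normalized_adjacency(adj):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2."""
    adj = sp.csr_matrix(adj) + sp.eye(adj.shape[0])
    deg = np.asarray(adj.sum(axis=1)).ravel()
    with np.errstate(divide="ignore"):
        d_inv_sqrt = np.power(deg, -0.5)
    d_inv_sqrt[np.isinf(d_inv_sqrt)] = 0.0
    d_mat = sp.diags(d_inv_sqrt)
    return d_mat @ adj @ d_mat

def low_pass_filter_view(adj, features, t=2):
    """Smooth node features with a t-order low-pass filter H = I - 0.5 * L_sym."""
    n = adj.shape[0]
    laplacian = sp.eye(n) - normalized_adjacency(adj)
    h = sp.eye(n) - 0.5 * laplacian  # attenuates high-frequency components
    x = np.asarray(features, dtype=float)
    for _ in range(t):
        x = h @ x
    return x

def ebc_structure_matrix(adj):
    """Dense matrix whose (i, j) entry is the betweenness centrality of edge (i, j)."""
    g = nx.from_scipy_sparse_array(sp.csr_matrix(adj))
    ebc = nx.edge_betweenness_centrality(g, normalized=True)
    n = adj.shape[0]
    s = np.zeros((n, n))
    for (u, v), w in ebc.items():
        s[u, v] = s[v, u] = w
    return s
```

In a dual-channel setup of this kind, the filtered features would typically feed the two encoder channels, while the EBC-weighted matrix would serve as the structure-preserving guidance; the exact losses and alignment steps are described in the paper itself.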