Multi-View Graph Disentanglement via Joint Contrastive Optimization

18 Sept 2025 (modified: 30 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Graph Representation Learning, Graph Disentanglement, Joint Contrastive Optimization
Abstract: Graph Representation Learning (GRL) has made great progress by constructing multiple views and optimizing node representations through mutual information maximization or contrastive learning. However, existing methods typically rely on graph augmentation to construct node- and graph-level views and then maximize inter-view consistency. This strategy tends to force different views toward homogeneity and may discard critical information in the graph data. Moreover, views at different hierarchical levels exhibit inherent limitations: node-level views are sensitive to noise, while graph-level views overlook local structural information. In this work, we propose $\textbf{M}$ulti-view $\textbf{G}$raph $\textbf{R}$epresentation $\textbf{L}$earning with $\textbf{D}$isentanglement via joint contrastive optimization (MGDRL). Multi-view graph disentanglement (MGD) promotes divergence among representations across views, forming decoupled views. However, disentanglement alone may yield meaningless representations. We therefore employ a fuzzy self-attention mechanism to construct an aggregation graph and impose synergistic constraints between the aggregation graph and MGD through joint contrastive optimization. This joint optimization guides the decoupled views toward distinct and diverse information, while also extending contrastive learning to the subgraph level of the aggregation graph, integrating local and global information. Experimental results on benchmark datasets demonstrate the superior performance of MGDRL.
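The abstract describes a joint objective that combines a contrastive term (pulling representations toward the aggregation graph) with a disentanglement term (pushing view representations apart). A minimal sketch of such a combination is shown below; the specific choices here — InfoNCE as the contrastive loss, a cross-correlation penalty as the disentanglement term, and the hyperparameters `tau` and `lam` — are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Contrastive (InfoNCE) loss: row i of z1 and row i of z2 are positives."""
    # L2-normalize node representations so similarities are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                                   # (N, N) similarity matrix
    # log-softmax per row; positives sit on the diagonal
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def disentangle_penalty(z1, z2):
    """Push two views apart by penalizing their cross-correlation (assumed form)."""
    z1c = z1 - z1.mean(axis=0)
    z2c = z2 - z2.mean(axis=0)
    cross = (z1c.T @ z2c) / z1.shape[0]                     # (d, d) cross-covariance
    return np.mean(cross ** 2)

def joint_objective(z_view, z_agg, lam=0.1):
    """Contrastive alignment to the aggregation graph plus view disentanglement."""
    return info_nce(z_view, z_agg) + lam * disentangle_penalty(z_view, z_agg)
```

In this sketch the two terms pull in opposite directions, which mirrors the paper's motivation: the contrastive term alone would drive views toward homogeneity, while the disentanglement term alone could yield meaningless representations; the weight `lam` (a hypothetical knob) trades the two off.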
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 10304