A Dual-View Contrastive Learning Framework for Heterogeneous Graph Representation Learning

16 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Multi-instance/Multi-view Learning, Representation Learning
Abstract: Heterogeneous graph representation learning leverages the rich semantics and complex structural relationships within heterogeneous graphs. However, existing methods often fail to capture long-range semantic dependencies and localized structural patterns simultaneously. We therefore propose a novel Dual-View Contrastive Learning framework (DVCL) for heterogeneous graph representation learning. Specifically, the Graph Schema View Module (GSVM) models structural dependencies by leveraging a relational graph neural network with type-aware message passing and adaptive residual connections. The Semantic Meta-Path Mamba Module (SMPMM) then captures high-order semantic dependencies through a globally enhanced Mamba backbone equipped with multi-resolution fusion and directional positional encodings. Moreover, a dynamic bidirectional contrastive learning mechanism integrates the semantic and structural views, treating each view as a learnable augmentation of the other to ensure robust and complementary representations. Extensive experiments on four datasets demonstrate that the proposed method consistently outperforms state-of-the-art methods on classification and clustering tasks.
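The abstract does not give the exact loss, but bidirectional contrastive alignment between two views is commonly realized as a symmetric InfoNCE objective, where each node's structural embedding and semantic embedding form the positive pair and all other nodes act as negatives. The sketch below illustrates that general idea only; the function name, temperature value, and NumPy formulation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bidirectional_infonce(z_struct, z_sem, tau=0.5):
    """Symmetric (bidirectional) InfoNCE between two view embeddings.

    z_struct, z_sem: (N, d) arrays; row i of each view corresponds to the
    same node, so the diagonal of the similarity matrix holds positives.
    Illustrative sketch only -- not the paper's actual loss.
    """
    # L2-normalize each view so the dot product is a cosine similarity
    z1 = z_struct / np.linalg.norm(z_struct, axis=1, keepdims=True)
    z2 = z_sem / np.linalg.norm(z_sem, axis=1, keepdims=True)
    # Temperature-scaled similarity matrix: sim[i, j] = cos(z1_i, z2_j) / tau
    sim = z1 @ z2.T / tau

    def nce(logits):
        # Cross-entropy with the diagonal entries as the positive class
        logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
        log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_prob))

    # Average the structural->semantic and semantic->structural directions
    return 0.5 * (nce(sim) + nce(sim.T))
```

When the two views agree (each node's embeddings are aligned and distinct from other nodes'), the loss approaches zero; misaligned views are penalized symmetrically in both directions.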
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 7632