Two Lenses are Better Than One: Dual Vector Quantization for Self-Supervised Graph Learning

15 Sept 2025 (modified: 21 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Graph Neural Network
Abstract: Graph Contrastive Learning (GCL) has emerged as a powerful paradigm for learning node representations without explicit labels. A key aspect of GCL is the generation of diverse and informative views for contrast. In this work, we introduce DualVC (Dual Vector-Quantized Contrastive Learning), a novel GCL framework that employs two distinct vector-quantized (VQ) codebooks to enrich representation learning. DualVC first uses a Graph Neural Network (GNN) encoder to produce continuous node embeddings, which are then simultaneously mapped to two separate discrete latent spaces via two independent VQ layers, each with its own learnable codebook. We posit that these dual codebooks can either capture complementary facets of the node representations or provide diverse "discretization perspectives," thereby fostering more robust and discriminative features. After passing through projection heads, the quantized representations serve as positive pairs for a contrastive objective that encourages alignment between the two views of the same node while promoting separability from other nodes. The discrete bottleneck imposed by the VQ layers also encourages compact representations and can improve generalization. We demonstrate that DualVC achieves competitive performance on various graph learning benchmarks, highlighting the benefits of integrating multiple learned discrete bottlenecks within a contrastive framework. Our contributions include the novel dual-codebook VQ architecture for GCL and an empirical validation of its effectiveness.
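
The abstract describes a concrete pipeline: GNN encoder → two independent VQ layers → projection heads → contrastive loss. Below is a minimal sketch of that pipeline, assuming PyTorch with VQ-VAE-style straight-through quantization and an InfoNCE-style objective; all module names, dimensions, and loss weights are our illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of a DualVC-style forward pass (assumed, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Nearest-neighbor VQ with a straight-through estimator (as in VQ-VAE)."""
    def __init__(self, num_codes: int, dim: int):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        nn.init.uniform_(self.codebook.weight, -1.0 / num_codes, 1.0 / num_codes)

    def forward(self, z):
        # Distance from each embedding to every code vector; pick the nearest.
        d = torch.cdist(z, self.codebook.weight)          # [N, num_codes]
        idx = d.argmin(dim=1)                             # [N]
        q = self.codebook(idx)                            # [N, dim]
        # VQ-VAE losses: pull codes toward encoder outputs, and commit z to its code.
        vq_loss = F.mse_loss(q, z.detach()) + 0.25 * F.mse_loss(z, q.detach())
        # Straight-through: gradients flow to z as if quantization were identity.
        q = z + (q - z).detach()
        return q, vq_loss

def nt_xent(h1, h2, tau: float = 0.5):
    """InfoNCE-style loss: the same node across the two views is the positive pair."""
    h1, h2 = F.normalize(h1, dim=1), F.normalize(h2, dim=1)
    logits = h1 @ h2.t() / tau                            # [N, N] similarity matrix
    labels = torch.arange(h1.size(0), device=h1.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

class DualVC(nn.Module):
    def __init__(self, encoder: nn.Module, dim: int = 256, num_codes: int = 512):
        super().__init__()
        self.encoder = encoder                            # any GNN producing [N, dim] embeddings
        self.vq1 = VectorQuantizer(num_codes, dim)        # two independent learnable codebooks
        self.vq2 = VectorQuantizer(num_codes, dim)
        self.proj1 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.proj2 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x, edge_index):
        z = self.encoder(x, edge_index)                   # continuous node embeddings [N, dim]
        q1, l1 = self.vq1(z)                              # two "discretization perspectives"
        q2, l2 = self.vq2(z)
        h1, h2 = self.proj1(q1), self.proj2(q2)           # per-branch projection heads
        return nt_xent(h1, h2) + l1 + l2                  # contrastive + VQ/commitment terms
```

Note the design choice this sketch makes explicit: both codebooks quantize the *same* continuous embedding, so the contrastive views differ only through the two discrete bottlenecks, which is what pressures the codebooks toward complementary or diverse discretizations.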
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 6210