Transformer-based Unsupervised Graph Domain Adaptation

ICLR 2026 Conference Submission 20809 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Unsupervised Graph Domain Adaptation, Graph Transformer, Cross-Attention, Transfer Learning, Graph Neural Network
Abstract: Unsupervised Graph Domain Adaptation (UGDA) addresses the challenge of domain shift when transferring knowledge from a labeled source graph to an unlabeled target graph. Existing UGDA methods rely solely on graph neural networks (GNNs) with limited receptive fields, which constrains their ability to capture long-range dependencies between domains and to adapt effectively to sparse graph domains. To overcome this limitation, we introduce a novel transformer-based UGDA (TUGDA) framework that sequentially integrates transformers and asymmetric GCNs to capture both global and local structural dependencies between source and target graphs. Our framework leverages a transformer backbone enriched with centrality and spatial positional encodings to enhance structural information. We further propose a new cross-attention mechanism that explicitly aligns source and target representations, together with a theoretical analysis showing that it reduces domain divergence via Wasserstein distance minimization. Extensive experiments on six cross-domain tasks across three real-world citation graphs show significant improvements over state-of-the-art (SOTA) UGDA baselines. Our results validate TUGDA's ability to learn transferable, domain-invariant representations. To address practical scenarios where privacy or security constraints restrict access to real source domains, we demonstrate that TUGDA maintains strong performance using synthetic source graphs generated by a foundation model, achieving 4-15% improvements over a key SOTA baseline.
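The abstract gives no implementation details of the proposed cross-attention mechanism. As a rough illustration only, the sketch below shows one plausible form of cross-attention in which target-graph node embeddings attend to source-graph node embeddings; the class name, single-head simplification, and projection layout are assumptions for illustration, not the authors' TUGDA module.

```python
import torch
import torch.nn as nn

class CrossDomainAttention(nn.Module):
    """Single-head cross-attention: target nodes attend to source nodes.

    Illustrative sketch only; not the paper's actual architecture.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)  # queries from target nodes
        self.k_proj = nn.Linear(dim, dim)  # keys from source nodes
        self.v_proj = nn.Linear(dim, dim)  # values from source nodes
        self.scale = dim ** -0.5

    def forward(self, h_tgt: torch.Tensor, h_src: torch.Tensor) -> torch.Tensor:
        # h_tgt: [N_tgt, dim] target node embeddings (queries)
        # h_src: [N_src, dim] source node embeddings (keys/values)
        q = self.q_proj(h_tgt)
        k = self.k_proj(h_src)
        v = self.v_proj(h_src)
        attn = torch.softmax((q @ k.T) * self.scale, dim=-1)  # [N_tgt, N_src]
        # Target representations expressed as mixtures of source representations,
        # i.e. explicitly aligned toward the source domain.
        return attn @ v


# Toy usage: 8 source nodes and 5 target nodes with 16-dimensional embeddings.
h_src = torch.randn(8, 16)
h_tgt = torch.randn(5, 16)
aligned = CrossDomainAttention(16)(h_tgt, h_src)
print(aligned.shape)  # torch.Size([5, 16])
```

In this simplified view, each target node's new representation is a convex combination of source-node values, which is one intuitive way such a mechanism could pull the two domains' representations closer together.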
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 20809