Disentangled Graph Spectral Domain Adaptation

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Distribution shifts and the scarcity of labels prevent graph learning methods, especially graph neural networks (GNNs), from generalizing across domains. Compared with Unsupervised Domain Adaptation (UDA) via embedding alignment, Unsupervised Graph Domain Adaptation (UGDA) is more challenging because attributes and topology are entangled in the representation. Beyond embedding alignment, UGDA turns to topology alignment but is limited by the capacity of the employed topology model and the quality of the estimated pseudo labels. To alleviate this issue, this paper proposes Disentangled Graph Spectral Domain Adaptation (DGSDA), which disentangles attribute and topology alignments and directly aligns flexible graph spectral filters rather than the topology itself. Specifically, Bernstein polynomial approximation, which closely mimics the behavior of the function being approximated, is employed to capture complicated topology characteristics while avoiding expensive eigenvalue decomposition. Theoretical analysis establishes a tight graph domain adaptation (GDA) bound for DGSDA and justifies the polynomial coefficient regularization. Quantitative and qualitative experiments demonstrate the superiority of the proposed DGSDA.
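To make the filtering idea concrete, below is a minimal sketch of a Bernstein-polynomial spectral filter in the style of BernNet, showing how a degree-K filter can be applied through repeated sparse-style propagation rather than an explicit eigenvalue decomposition. The function name, parameter names (K, theta), and the use of the normalized Laplacian are illustrative assumptions and do not reflect the paper's actual implementation.

```python
# Sketch of a Bernstein-polynomial spectral filter applied without eigendecomposition.
import torch
from scipy.special import comb

def bernstein_filter(x, L_norm, theta):
    """Apply sum_k theta_k * C(K,k)/2^K * (2I - L)^{K-k} L^k to features x.

    x:      (n, d) node feature matrix
    L_norm: (n, n) normalized graph Laplacian (dense tensor for simplicity)
    theta:  (K+1,) learnable Bernstein coefficients
    """
    K = theta.shape[0] - 1
    n = x.shape[0]
    I = torch.eye(n, device=x.device)
    two_I_minus_L = 2.0 * I - L_norm

    # Precompute L^k x for k = 0..K by repeated multiplication (no eigendecomposition).
    Lk_x = [x]
    for _ in range(K):
        Lk_x.append(L_norm @ Lk_x[-1])

    out = torch.zeros_like(x)
    for k in range(K + 1):
        term = Lk_x[k]
        for _ in range(K - k):  # apply (2I - L) the remaining K - k times
            term = two_I_minus_L @ term
        out = out + theta[k] * (comb(K, k) / (2 ** K)) * term
    return out
```

Because each Bernstein basis term is evaluated by matrix-vector propagation, the cost scales with the number of edges and the polynomial degree rather than with a full spectral decomposition of the graph.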
Lay Summary: Imagine training a recommendation system on Twitter to predict user interests, then deploying it on Facebook. Even though both are social platforms, differences in attributes (user profiles) and topology (connection patterns) cause performance drops. Traditional adaptation methods blend these two aspects together, trying to adjust both at the same time, which leads to suboptimal adaptation. In this work, we propose a method named DGSDA that aligns topology and attributes separately. We begin with attribute alignment using a conventional method and then focus on the differences in topology patterns. However, existing topology-alignment methods are limited by the capacity of the employed model and the quality of the estimated pseudo-labels. Fortunately, the spectral information of a network is closely tied to its topology. Therefore, instead of directly aligning the topologies of the two networks, we align their graph spectral filters by adjusting the polynomial parameters of the filters, which is equivalent to adjusting the spectral information of the network. DGSDA is not only more flexible but also avoids costly computations.
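The following is a minimal sketch of what aligning two filters through their polynomial coefficients could look like: each domain keeps its own set of Bernstein coefficients, and a penalty on their discrepancy pulls the two spectral responses together. The class name, the squared-difference penalty, and the loss weighting are hypothetical choices for illustration, not the paper's exact objective.

```python
# Sketch: aligning source and target spectral filters via their polynomial coefficients.
import torch
import torch.nn as nn

class CoefficientAlignedFilters(nn.Module):
    def __init__(self, K=10):
        super().__init__()
        # Separate Bernstein coefficients for the source and target graphs.
        self.theta_src = nn.Parameter(torch.ones(K + 1))
        self.theta_tgt = nn.Parameter(torch.ones(K + 1))

    def alignment_loss(self):
        # Penalize the discrepancy between the two filters' coefficients; with a
        # shared Bernstein basis, this bounds the gap between their spectral
        # responses on the Laplacian spectrum [0, 2].
        return torch.sum((self.theta_src - self.theta_tgt) ** 2)

# Usage sketch (lambda_align is an assumed trade-off hyperparameter):
#   total_loss = task_loss + lambda_align * model.alignment_loss()
```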
Primary Area: Deep Learning->Graph Neural Networks
Keywords: graph neural networks, unsupervised domain adaptation, disentangled learning, spectral domain
Submission Number: 5466