Degree-aware Spiking Graph Domain Adaptation for Classification

ICLR 2025 Conference Submission 13906 Authors

28 Sept 2024 (modified: 26 Nov 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: spiking graph neural network; domain adaptation
Abstract: Spiking Graph Networks (SGNs) have garnered significant interest from both researchers and industry due to their ability to address energy-consumption challenges in graph classification. However, SGNs typically assume that inference data follow the same distribution as the training set, an assumption that is difficult to satisfy in real applications. In this paper, we first formulate the domain adaptation problem for SGNs and introduce a novel framework named Degree-aware Spiking Graph Domain Adaptation for Classification (DeSGDA). We address this problem from three aspects: node distribution-aware personalized spiking representation, graph feature distribution alignment, and pseudo-label distillation. First, we introduce a personalized spiking representation method in which the difficulty of triggering a spike is determined by node degree, allowing the representation to capture more expressive information for classification. Second, we propose a graph feature distribution alignment module in which membrane potentials are adversarially trained against a domain discriminator, efficiently maintaining high performance and low energy consumption under inconsistent distributions. Additionally, we extract consistent predictions across two spaces to create reliable pseudo-labels, effectively leveraging unlabeled data to enhance graph classification performance. Extensive experiments on benchmark datasets validate the superiority of the proposed DeSGDA over baselines.
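To make the degree-aware spiking idea concrete, below is a minimal PyTorch sketch of a leaky integrate-and-fire (LIF) node whose firing threshold grows with node degree, and which also exposes the membrane potential that the alignment module could pass to a domain discriminator. The class name, the logarithmic threshold schedule, and all hyperparameters are illustrative assumptions based only on the abstract, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class DegreeAwareLIF(nn.Module):
    """LIF neuron with a degree-dependent firing threshold.

    Hypothetical sketch: the abstract states only that the difficulty of
    triggering a spike is determined by node degree; the log1p schedule
    below is an assumed parameterization.
    """

    def __init__(self, v_th: float = 1.0, alpha: float = 0.1, tau: float = 2.0):
        super().__init__()
        self.v_th = v_th    # base threshold for an isolated node
        self.alpha = alpha  # degree sensitivity: larger degree -> higher threshold
        self.tau = tau      # membrane time constant (controls the leak)

    def forward(self, x_seq: torch.Tensor, degree: torch.Tensor):
        # x_seq: [T, N, D] input currents over T time steps for N nodes
        # degree: [N] node degrees
        th = self.v_th * (1.0 + self.alpha * torch.log1p(degree.float()))
        th = th.view(-1, 1)  # broadcast threshold over the feature dimension
        mem = torch.zeros_like(x_seq[0])
        spikes = []
        for x_t in x_seq:                       # discrete-time LIF dynamics
            mem = mem + (x_t - mem) / self.tau  # leaky integration
            spk = (mem >= th).float()           # fire where the potential crosses the threshold
            mem = mem * (1.0 - spk)             # hard reset after a spike
            spikes.append(spk)
        # In training, a surrogate gradient would replace the non-differentiable
        # Heaviside step above. Spike trains feed the classifier; the analog
        # membrane potential is what an adversarial alignment module could
        # present to a domain discriminator.
        return torch.stack(spikes), mem


if __name__ == "__main__":
    T, N, D = 4, 8, 16  # time steps, nodes, feature dimension
    node = DegreeAwareLIF()
    spikes, mem = node(torch.rand(T, N, D), torch.randint(1, 20, (N,)))
    print(spikes.shape, mem.shape)  # torch.Size([4, 8, 16]) torch.Size([8, 16])
```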
Primary Area: other topics in machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 13906