Keywords: graph transformer, spiking neural network, vector quantization
Abstract: Graph Transformers (GTs), which integrate message passing and self-attention mechanisms, have achieved promising empirical results on several graph prediction tasks. Although these approaches show the potential of Transformers in capturing long-range graph topology, their quadratic complexity and high energy consumption severely impair their scalability to large graphs. Spiking Neural Networks (SNNs), as brain-inspired neural networks, offer an energy-efficient deep learning alternative with lower computational and storage overhead thanks to their spike-based, event-driven biological neurons. Inspired by these characteristics, we propose SGHormerVQ, which bridges efficient Graph Transformers and spiking neural networks via spiking vector quantization. Spiking vector quantization generates implicit codebooks with smaller sizes and higher codebook usage, which assist the self-attention blocks in performing efficient global information aggregation. SGHormerVQ effectively alleviates both the reliance on complex machinery (distance measures, auxiliary losses, etc.) and the codebook collapse observed in previous vector quantization-based GNNs. In experiments, we compare SGHormerVQ with state-of-the-art baselines on node classification datasets ranging from small to large. Experimental results show that SGHormerVQ achieves competitive performance on most datasets while delivering up to 518× faster inference than other GTs.
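To make the high-level idea in the abstract concrete, the sketch below illustrates one plausible way a spike-based quantizer could produce a small implicit codebook that self-attention attends to instead of all N nodes, reducing attention cost from O(N²) to O(N·K). This is a minimal, hypothetical illustration, not the authors' SGHormerVQ implementation; all class names, thresholds, and hyperparameters are assumptions, and surrogate-gradient training details are omitted.

```python
# Hypothetical sketch: spike-based quantization producing an implicit codebook
# that attention aggregates over. Not the authors' SGHormerVQ code.
import torch
import torch.nn as nn


class SpikingQuantizer(nn.Module):
    """Maps node features to binary spike codes via a simple threshold neuron
    (a crude integrate-and-fire stand-in); nodes with equal codes share a codebook entry."""

    def __init__(self, dim: int, code_bits: int = 8):
        super().__init__()
        self.proj = nn.Linear(dim, code_bits)

    def forward(self, x: torch.Tensor):
        # x: [N, dim] node features -> binary spike codes [N, code_bits]
        membrane = self.proj(x)
        spikes = (membrane > 0).float()                # fire when the potential crosses 0
        codes, inverse = torch.unique(spikes, dim=0, return_inverse=True)
        # Each codebook entry is the mean of features sharing the same spike pattern.
        codebook = torch.zeros(codes.size(0), x.size(1), device=x.device, dtype=x.dtype)
        codebook.index_add_(0, inverse, x)
        counts = torch.bincount(inverse, minlength=codes.size(0)).clamp(min=1)
        codebook = codebook / counts.unsqueeze(1)
        return codebook, inverse                       # K entries, with K << N in practice


class CodebookAttention(nn.Module):
    """Nodes attend to the K codebook entries instead of all N nodes: O(N*K) cost."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)

    def forward(self, x: torch.Tensor, codebook: torch.Tensor):
        q = self.q(x)                                  # [N, dim]
        k, v = self.kv(codebook).chunk(2, dim=-1)      # [K, dim] each
        attn = torch.softmax(q @ k.t() / x.size(1) ** 0.5, dim=-1)
        return attn @ v                                # [N, dim] globally aggregated features


if __name__ == "__main__":
    x = torch.randn(1000, 64)                          # 1000 nodes, 64-dim features
    codebook, assignment = SpikingQuantizer(64)(x)
    out = CodebookAttention(64)(x, codebook)
    print(codebook.shape, out.shape)
```

Because the codes are induced directly by spiking activity rather than by nearest-neighbor lookup against a learned codebook, this style of quantization needs no explicit distance measure or auxiliary commitment loss, which is the property the abstract attributes to spiking vector quantization.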
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7122