Track: Main paper track (up to 5 pages excluding references and appendix)
Keywords: Graph Neural Networks, Low-Rank Adapters, Fine-tuning
Abstract: We introduce GConv-Adapter, a new low-rank graph adapter that combines a two-fold normalized graph convolution with trainable low-rank weight matrices. It achieves state-of-the-art (SOTA) or near-SOTA performance when fine-tuning both standard message-passing neural networks (MPNNs) and graph transformers (GTs), in both inductive and transductive learning. We motivate the design by deriving an upper bound on the adapter's Lipschitz constant for $\delta$-regular random (expander) graphs, and we contrast this with prior adapters, whose Lipschitz constants we show to be unbounded.
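The abstract does not spell out the adapter's exact form, so the following is a minimal sketch, assuming a LoRA-style residual bottleneck applied after a symmetrically normalized graph convolution. The class name, the rank parameter, the zero-initialized up-projection, and the residual placement are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LowRankGraphAdapter(nn.Module):
    """Hypothetical sketch of a low-rank graph adapter: a normalized
    graph convolution followed by a rank-r down/up projection, added
    residually to the frozen backbone's features."""

    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)  # trainable low-rank factor
        self.up = nn.Linear(rank, dim, bias=False)    # trainable low-rank factor
        nn.init.zeros_(self.up.weight)  # adapter starts as the identity map (LoRA-style)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # adj_norm: a normalized adjacency, e.g. D^{-1/2} (A + I) D^{-1/2};
        # the paper's "two-fold" normalization may differ from this guess.
        h = adj_norm @ x                   # graph convolution over node features
        return x + self.up(self.down(h))   # residual low-rank update
```

Because the up-projection is zero-initialized, inserting the adapter leaves the frozen backbone's outputs unchanged at the start of fine-tuning; only the two small rank-$r$ matrices are updated.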
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 66