E2G-Net: Enhancing Efficiency in Graph Neural Networks With Early-Exit Branches

Published: 04 Jun 2025 · Last Modified: 27 Jan 2026 · IEEE Transactions on Emerging Topics in Computational Intelligence · CC BY 4.0
Abstract: Graph Neural Networks (GNNs) are effective for learning on graph-structured data but often suffer from high inference costs, particularly in deeper architectures. Standard GNNs employ a single-exit design, processing all inputs through the entire network regardless of their complexity—resulting in unnecessary computation for simpler instances. This paper introduces E2G-Net, a multi-exit GNN architecture that inserts early-exit branches at intermediate layers to enable instance-adaptive inference. A Bayesian Optimization (BO)-based policy determines the optimal exit criterion and threshold at each branch, optimizing the trade-off between accuracy and efficiency. E2G-Net is evaluated using GCN and GAT backbones on ten node classification benchmarks spanning homophilic, heterophilic, and large-scale graphs. It achieves up to 3.7× inference speedup (Cornell) and over 45% FLOPs reduction (OGBN-Arxiv), while preserving and often improving classification accuracy across datasets. These results demonstrate E2G-Net's scalability and efficiency for real-world graph inference.
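The core mechanism described in the abstract can be sketched in a few lines: a GCN backbone with a lightweight classifier head at each layer, where a node exits at the first branch whose prediction confidence clears a threshold. This is a minimal illustrative sketch only, assuming max-softmax confidence as the exit criterion and a single shared threshold; the paper's BO-selected, per-branch criteria and thresholds, and all function names below, are hypothetical simplifications.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def normalize_adj(A):
    # Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def multi_exit_gcn(A, X, weights, exit_heads, threshold=0.9):
    """Instance-adaptive inference: each node exits at the first
    branch whose max-softmax confidence exceeds `threshold`;
    undecided nodes are forced out at the final layer."""
    A_norm = normalize_adj(A)
    n = X.shape[0]
    preds = np.full(n, -1)        # -1 marks nodes still undecided
    exit_layer = np.full(n, -1)   # records which branch each node used
    H = X
    last = len(weights) - 1
    for layer, (W, W_exit) in enumerate(zip(weights, exit_heads)):
        H = np.maximum(A_norm @ H @ W, 0.0)  # GCN propagation + ReLU
        probs = softmax(H @ W_exit)          # early-exit branch classifier
        conf = probs.max(axis=1)
        undecided = preds == -1
        take = undecided & ((conf >= threshold) | (layer == last))
        preds[take] = probs[take].argmax(axis=1)
        exit_layer[take] = layer
    return preds, exit_layer
```

In a full implementation, only the not-yet-exited nodes (and their receptive fields) would be propagated to deeper layers, which is where the reported FLOPs savings come from; the sketch computes all layers for clarity.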