Abstract: Graph Neural Networks (GNNs) have demonstrated remarkable effectiveness in a variety of graph-based tasks, but their inefficient training and inference pose significant challenges for scaling to real-world, large-scale applications. To address these challenges, a plethora of algorithms has been developed to accelerate GNN training and inference, garnering substantial interest from the research community. This paper presents a systematic review of these acceleration algorithms, categorizing them into three main topics: training acceleration, inference acceleration, and execution acceleration. For training acceleration, we discuss techniques such as graph sampling and GNN simplification. For inference acceleration, we focus on knowledge distillation, GNN quantization, and GNN pruning. For execution acceleration, we explore GNN binarization and graph condensation. Additionally, we review several libraries related to GNN acceleration, including our Scalable Graph Learning library, and propose future research directions.
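To make the first of these techniques concrete, the sketch below shows mini-batch GNN training with neighbor sampling using PyTorch Geometric's NeighborLoader. This is a minimal illustration of the graph-sampling family the abstract cites, not the survey's own code; the dataset, model, fan-out, and hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch: neighbor sampling limits each mini-batch to a
# small sampled subgraph, so training cost no longer depends on the
# full graph size. Dataset/model choices here are assumptions.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.loader import NeighborLoader
from torch_geometric.nn import GraphSAGE

data = Planetoid(root="data", name="Cora")[0]

# Sample at most 10 neighbors per node at each of 2 hops.
loader = NeighborLoader(
    data,
    num_neighbors=[10, 10],
    batch_size=128,
    input_nodes=data.train_mask,
    shuffle=True,
)

model = GraphSAGE(in_channels=data.num_features, hidden_channels=64,
                  num_layers=2, out_channels=7)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

model.train()
for batch in loader:
    optimizer.zero_grad()
    out = model(batch.x, batch.edge_index)
    # Only the first `batch_size` nodes in the batch are seed nodes;
    # the rest are sampled neighbors used for message passing.
    loss = F.cross_entropy(out[:batch.batch_size],
                           batch.y[:batch.batch_size])
    loss.backward()
    optimizer.step()
```

The key design point is the two-element fan-out list (`[10, 10]`), which caps the sampled subgraph at two hops and bounds per-batch memory regardless of how large the input graph grows.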
External IDs: dblp:journals/tkde/MaSLGHYNJZC25