On the Characterization of GraphML Frameworks: The Case of Semi-Supervised Node Classification

Published: 01 Jan 2025, Last Modified: 18 Jul 2025 · ISCAS 2025 · License: CC BY-SA 4.0
Abstract: In recent years, the application of machine learning techniques to graphs has generated considerable interest, leading to the development of many Graph Machine Learning (GraphML) frameworks. However, the appropriate framework must be selected depending on the application, which requires time and resources. To address this issue, this work characterizes three GraphML frameworks: PyTorch Geometric (PyG), Deep Graph Library (DGL), and StellarGraph, on four different GPU architectures for the task of semi-supervised node classification. We compare the training and inference times, as well as the accuracy and loss curves, for each configuration under identical model setups. Results show that PyTorch-based frameworks are faster than those using TensorFlow. Furthermore, we show that DGL exhibits steeper convergence and can outperform PyG in the presented case study, while PyG's training time per epoch is shorter. Additionally, our evaluation highlights that the frameworks do not always fully exploit newer generations of server-grade GPUs. This study demonstrates that selecting the most suitable GraphML framework is a multifaceted problem that can directly impact performance for the end-user.
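For readers unfamiliar with the benchmarked task, the following is a minimal, framework-free sketch of semi-supervised node classification: only a few nodes carry labels, and labels for the remaining nodes are inferred from graph structure. It uses plain NumPy label propagation on a toy graph; this is an illustration of the task only, not the GCN-style models or the PyG/DGL/StellarGraph code evaluated in the paper.

```python
import numpy as np

def label_propagation(adj, labels, mask, num_classes, iters=50):
    """Semi-supervised node classification via iterative label propagation.

    adj: (n, n) adjacency matrix; labels: (n,) integer labels, valid only
    where mask is True; mask: (n,) boolean array marking the labeled nodes.
    Returns a predicted class for every node.
    """
    n = adj.shape[0]
    # Row-normalize the adjacency with self-loops so each step averages
    # a node's own score with its neighbours' scores.
    a = adj + np.eye(n)
    a = a / a.sum(axis=1, keepdims=True)
    # One-hot seed matrix: all-zero rows for unlabeled nodes.
    y = np.zeros((n, num_classes))
    y[mask, labels[mask]] = 1.0
    f = y.copy()
    for _ in range(iters):
        f = a @ f
        f[mask] = y[mask]  # clamp the known labels after every step
    return f.argmax(axis=1)

# Toy graph: two triangles joined by a single edge, with one labeled
# node per cluster (node 0 -> class 0, node 5 -> class 1).
adj = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
labels = np.array([0, -1, -1, -1, -1, 1])  # -1 marks "unlabeled"
mask = np.array([True, False, False, False, False, True])

pred = label_propagation(adj, labels, mask, num_classes=2)
print(pred.tolist())  # each cluster inherits its seed node's class
```

The clamping step is what makes the procedure semi-supervised: the handful of known labels act as fixed anchors while information diffuses to the unlabeled nodes, which is the same problem setting the benchmarked GNN models are trained on.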