Training speedups via batching for geometric learning: an analysis of static and dynamic algorithms

Published: 05 Nov 2025, Last Modified: 05 Nov 2025 · AI4Mat-NeurIPS-2025 Poster · CC BY 4.0
Keywords: Graph neural networks, materials science, batching
Abstract: Graph neural networks (GNNs) have shown promising results in several domains, such as materials science, chemistry, and the social sciences. GNN models often contain millions of parameters and, like other neural network (NN) models, are typically updated using only a fraction of the training graphs at a time, i.e., in batches. The effect of batching algorithms on training time and model performance has been thoroughly explored for NNs, but not yet for GNNs. We analyze two batching algorithms for graph-based models, static and dynamic batching, on two datasets: the QM9 dataset of small molecules and the AFLOW materials database. Our experiments show that changing the batching algorithm can provide up to a 2.7x speedup, but the fastest algorithm depends on the data, model, batch size, hardware, and number of training steps. For certain combinations of batch size, dataset, and model, we also observe significant differences in model learning metrics between static and dynamic batching.
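To make the comparison concrete, the following is a minimal sketch (not the authors' implementation; the `Graph`, `static_batches`, and `dynamic_batches` names and the budget parameters are hypothetical) of one common reading of the two strategies: static batching fixes the number of graphs per batch so downstream padding yields constant shapes, while dynamic batching greedily packs a variable number of graphs until node/edge budgets are reached, reducing wasted padding.

```python
# Minimal sketch contrasting static and dynamic batching of graphs.
from dataclasses import dataclass
from typing import Iterable, Iterator, List


@dataclass
class Graph:
    n_nodes: int
    n_edges: int


def static_batches(graphs: Iterable[Graph],
                   graphs_per_batch: int) -> Iterator[List[Graph]]:
    """Fixed graph count per batch; padding to fixed sizes happens downstream."""
    batch: List[Graph] = []
    for g in graphs:
        batch.append(g)
        if len(batch) == graphs_per_batch:
            yield batch
            batch = []
    if batch:  # emit the final, possibly smaller, batch
        yield batch


def dynamic_batches(graphs: Iterable[Graph],
                    node_budget: int,
                    edge_budget: int) -> Iterator[List[Graph]]:
    """Variable graph count per batch, bounded by total node/edge budgets."""
    batch: List[Graph] = []
    nodes = edges = 0
    for g in graphs:
        # Close the current batch if adding this graph would exceed a budget.
        if batch and (nodes + g.n_nodes > node_budget
                      or edges + g.n_edges > edge_budget):
            yield batch
            batch, nodes, edges = [], 0, 0
        batch.append(g)
        nodes += g.n_nodes
        edges += g.n_edges
    if batch:
        yield batch
```

Under this reading, static batching trades padding overhead for constant tensor shapes (useful for compiled accelerator kernels), while dynamic batching trades shape variability for denser batches; which wins depends on the factors the abstract lists, such as data, model, batch size, and hardware.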
Submission Track: Paper Track (Full Paper)
Submission Category: AI-Guided Design
Institution Location: Berlin, Germany; Munich, Germany; London, U.K.
AI4Mat Journal Track: Yes
AI4Mat RLSF: Yes
Submission Number: 50