Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: graph learning, graph neural networks, GNN, multi-GPU training
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Significant computational resources are required to train Graph Neural Networks (GNNs) at a large scale,
and the process is highly data-intensive.
One of the most effective ways to reduce resource requirements is minibatch training
coupled with graph sampling.
GNNs have the unique property that the items in a minibatch share overlapping data.
However, the commonly implemented Independent Minibatching approach assigns each Processing
Element (PE) its own minibatch to process, leading to duplicated computations and input data access across PEs.
This amplifies the Neighborhood Explosion Phenomenon (NEP), which is the main bottleneck limiting scaling.
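As a rough, generic illustration of NEP (not a figure from the paper): with a sampling fanout of f neighbors per layer, a single seed vertex can require up to 1 + f + f^2 + ... + f^L input vertices for an L-layer GNN. A minimal sketch, with arbitrary example values for f and L:

```python
# Back-of-the-envelope NEP illustration; fanout f=10 and depth L=3 are
# arbitrary example values, not figures from the paper.
f, L = 10, 3
print(sum(f**l for l in range(L + 1)))  # 1111 input vertices for one seed
```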
To reduce the effects of NEP in the multi-PE setting,
we propose a new approach called Cooperative Minibatching.
Our approach capitalizes on the fact that the size of the sampled subgraph is a concave function of the batch size, leading to
significant reductions in the amount of work per seed vertex as batch sizes increase. Hence, it is favorable for
processors to work together on a single large minibatch, as if they were one larger processor, instead of working on
separate smaller minibatches, even when the global batch size is identical.
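To make the concavity argument concrete, here is a minimal sketch on a synthetic graph (our own toy model, not the paper's implementation; the graph, sizes, and the `sampled_vertices` helper are all hypothetical). Under 1-hop neighbor sampling, k independent minibatches of size b touch more unique vertices in total than one cooperative minibatch of size k*b, because vertices duplicated across PEs are counted once in the cooperative case:

```python
import numpy as np

rng = np.random.default_rng(0)
N, deg, fanout = 100_000, 16, 8           # graph size, degree, sampling fanout
adj = rng.integers(0, N, size=(N, deg))   # synthetic fixed-degree neighbor lists

def sampled_vertices(seeds):
    """Unique vertices touched: seeds plus `fanout` sampled neighbors per seed."""
    cols = rng.choice(deg, size=fanout, replace=False)  # neighbor slots to keep
    return np.union1d(seeds, adj[seeds][:, cols].ravel())

k, b = 4, 1024                            # 4 PEs, per-PE batch size 1024
seeds = rng.choice(N, size=k * b, replace=False)

independent = sum(len(sampled_vertices(s)) for s in np.split(seeds, k))
cooperative = len(sampled_vertices(seeds))
print(f"independent total: {independent}, cooperative: {cooperative}")
# The cooperative count is smaller: |sampled subgraph| is concave in batch size.
```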
We also show how to take advantage of the same phenomenon in serial execution by generating dependent consecutive minibatches.
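As a toy model of this idea (our own construction for illustration; the seed-overlap mechanism and all constants below are assumptions, not the paper's actual procedure): if each minibatch reuses a fraction of the previous minibatch's vertices, many embeddings are already resident and need not be re-fetched.

```python
import numpy as np

rng = np.random.default_rng(0)
N, b, steps = 100_000, 4096, 50           # vertices, batch size, # of minibatches

def total_fetches(dep):
    """Embeddings fetched over `steps` minibatches; `dep` (<= 0.5 here) is the
    fraction of each batch drawn from the previous one (dependency strength)."""
    cached, prev, fetched = set(), np.empty(0, dtype=int), 0
    for _ in range(steps):
        n_reuse = int(dep * b) if len(prev) else 0
        reuse = rng.choice(prev, n_reuse, replace=False) if n_reuse else np.empty(0, int)
        fresh = rng.choice(N, b - n_reuse, replace=False)
        batch = np.union1d(reuse, fresh)
        fetched += len(set(batch) - cached)   # misses: embeddings not in the cache
        cached = set(batch)                   # cache holds only the last minibatch
        prev = batch
    return fetched

print("independent (dep=0.0):", total_fetches(0.0))
print("dependent   (dep=0.5):", total_fetches(0.5))
```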
Our experimental evaluations show that simply increasing this dependency yields up to 4x bandwidth savings for
fetching vertex embeddings, without harming model convergence. Combining our proposed approaches, we achieve up to a 64%
speedup over Independent Minibatching on single-node multi-GPU systems, and show
that load balancing is not an issue despite the use of lock-step communication.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4367