Full-Precision Free Binary Graph Neural Networks

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: Graph Neural Networks, Binary Neural Networks, Mixture of Experts
Abstract: Binary neural networks have become a promising research topic due to their fast inference speed and low energy consumption. However, most existing works focus on binary convolutional neural networks, while less attention has been paid to binary graph neural networks. A common drawback of existing binary graph neural networks is that they still involve many full-precision operations and are therefore not as efficient as they could be. In this paper, we propose a novel method, called full-precision free binary graph neural networks (FFBGN), to avoid full-precision operations when binarizing graph neural networks. To address the challenges introduced by re-quantization, a necessary procedure for avoiding full-precision operations, FFBGN first studies the impact of different computation orders to identify an effective one, and then introduces a mixture-of-experts mechanism to increase model capacity. Experiments on three large-scale datasets show that the computation order of re-quantization significantly affects the performance of binary graph neural network models, and that FFBGN outperforms other baselines, achieving state-of-the-art performance.
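As a rough illustration of the general idea of keeping a binarized GNN layer free of full-precision intermediates via re-quantization (this is not the paper's actual FFBGN implementation; the function names, computation order, and scaling scheme below are assumptions for illustration only), a minimal sketch might look like:

```python
import torch

def sign_binarize(x):
    # Map activations to {-1, +1}; the straight-through estimator used for
    # training binary networks is omitted here for brevity.
    return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

def binary_gnn_layer(adj, h_bin, w_bin, scale):
    # adj:   (N x N) binary adjacency matrix
    # h_bin: (N x d) binarized node features in {-1, +1}
    # w_bin: (d x d') binarized weights in {-1, +1}
    # scale: (d',) per-channel scaling factors
    # Hypothetical computation order: aggregate, then transform, then
    # re-quantize so the next layer never receives full-precision activations.
    agg = adj @ h_bin          # integer-valued neighborhood accumulation
    out = agg @ w_bin          # integer-valued linear transform
    out = out * scale          # cheap per-channel rescaling
    return sign_binarize(out)  # re-quantize back to {-1, +1}
```

The paper's point that re-quantization can be inserted at different stages corresponds, in this sketch, to choosing where `sign_binarize` is applied relative to the aggregation and transformation steps.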
One-sentence Summary: A method for constructing effective binary graph neural networks.