VIDGCN: Embracing input data diversity with a configurable graph convolutional network accelerator

Published: 01 Jan 2023 · Last Modified: 18 May 2025 · J. Syst. Archit. 2023 · License: CC BY-SA 4.0
Abstract: Hardware-accelerated inference is a promising way to exploit graph convolutional networks (GCNs) in latency-sensitive applications. Existing accelerators overlook an important barrier to widespread adoption: the input data of GCN inference (i.e., weighted graphs) vary widely in scale and sparsity, so an accelerator optimized for one class of graphs loses efficiency on others. This paper presents VIDGCN, a reconfigurable GCN inference accelerator that switches among all possible GCN inference computation schemes to deliver timely inference for any input graph. VIDGCN combines an analytical performance model with a reconfigurable hardware design. The performance model lets users find the optimal computation scheme for any given input graph; the hardware design reuses all computation units across all computation schemes, changing only how data are distributed to the units. Evaluation on seven real-world graphs shows that VIDGCN outperforms the state-of-the-art SGCNAX by 1.79× and consistently achieves the ideal number of memory accesses.
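To illustrate why a per-graph choice of computation scheme matters, consider that a GCN layer computes σ(A·X·W), and the multiplication order alone changes the work depending on the graph's sparsity and feature width. The sketch below is a hypothetical, simplified cost model in this spirit; it is not VIDGCN's actual analytical model, and the function names and FLOP formulas are assumptions for illustration.

```python
# Hypothetical sketch, NOT VIDGCN's actual performance model.
# A GCN layer computes sigma(A @ X @ W), where A is an n x n sparse
# adjacency matrix with nnz non-zeros, X is an n x f feature matrix,
# and W is an f x c weight matrix. Two multiplication orders exist:
#   "aggregate_first":  (A @ X) @ W  -- sparse multiply on wide features
#   "transform_first":  A @ (X @ W) -- shrink features f -> c first

def scheme_costs(n: int, f: int, c: int, nnz: int) -> dict:
    """Rough FLOP estimates for the two multiplication orders."""
    aggregate_first = nnz * f + n * f * c   # SpMM over f cols, then dense GEMM
    transform_first = n * f * c + nnz * c   # dense GEMM, then SpMM over c cols
    return {"aggregate_first": aggregate_first,
            "transform_first": transform_first}

def pick_scheme(n: int, f: int, c: int, nnz: int) -> str:
    """Select the cheaper scheme for a given graph and layer shape."""
    costs = scheme_costs(n, f, c, nnz)
    return min(costs, key=costs.get)

# A sparse graph with wide input features favors transforming first,
# since the sparse multiply then runs over only c columns instead of f.
print(pick_scheme(n=10_000, f=512, c=16, nnz=50_000))  # -> transform_first
```

A model along these lines would let an accelerator pick the scheme per input graph rather than committing to one order at design time, which is the kind of per-graph adaptivity the abstract describes.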