Synergistic fusion framework: Integrating training and non-training processes for accelerated graph convolution network-based recommendation
Abstract: Both the training and the inference (generating recommendation lists) of graph convolutional network (GCN)-based recommendation models are time-consuming. Existing techniques improve training speed by proposing new GCN variants. However, the development of GCNs has produced multiple technological branches that rely on graph-enhancement techniques, including subgraph and edge sampling; simply proposing yet another GCN variant for training acceleration is therefore inadequate, and a generalized training-acceleration framework applicable to multiple GCN models is still lacking. Another weakness of previous studies is that they neglect inference speed. This study introduces a candidate-based fusion framework that accelerates both the training and the inference of GCN models. For training acceleration, the framework achieves layer compression by aggregating information directly from candidate items generated in a non-training process. For inference acceleration, it ranks only the items in the candidate sets. The proposed framework generalizes across six state-of-the-art GCN models, and experimental results confirm its effectiveness.
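The abstract describes two mechanisms: aggregating directly from a pre-computed candidate set (layer compression) and restricting inference ranking to that candidate set. The sketch below illustrates this idea in minimal form; it is not the authors' implementation, and all function names, the popularity-based candidate generator, and the mixing weights are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the candidate-based fusion idea from the abstract.
# Names, the candidate-generation heuristic, and weights are assumptions.

rng = np.random.default_rng(0)
n_users, n_items, dim = 100, 500, 16

# Embeddings that a GCN-based recommender would normally learn.
user_emb = rng.normal(size=(n_users, dim))
item_emb = rng.normal(size=(n_items, dim))


def build_candidates(interactions, k=50):
    """Non-training process (assumed here to be popularity-based):
    select per-user candidate items without any gradient updates."""
    popularity = interactions.sum(axis=0)          # item interaction counts
    ranked = np.argsort(-popularity)               # most popular first
    return {
        u: [i for i in ranked if interactions[u, i] == 0][:k]
        for u in range(interactions.shape[0])
    }


def compressed_aggregate(u, candidates):
    """Layer compression: aggregate information directly from the user's
    candidate items instead of propagating through many GCN layers."""
    cand_emb = item_emb[candidates[u]]
    return 0.5 * user_emb[u] + 0.5 * cand_emb.mean(axis=0)


def recommend(u, candidates, top_n=10):
    """Inference acceleration: score and rank only the candidate set,
    not the full item catalogue."""
    cand = np.array(candidates[u])
    scores = item_emb[cand] @ compressed_aggregate(u, candidates)
    return cand[np.argsort(-scores)[:top_n]]


# Toy usage: a sparse random interaction matrix stands in for real data.
interactions = (rng.random((n_users, n_items)) < 0.02).astype(int)
candidates = build_candidates(interactions)
print(recommend(0, candidates))
```

Because scoring touches only the k candidate items rather than all n_items, inference cost per user drops from O(n_items) to O(k) dot products, which is the speed-up the abstract claims for the inference stage.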