A double-layer attentive graph convolutional network based on transfer learning for dynamic graph classification

Published: 01 Jan 2024, Last Modified: 07 Feb 2025. Int. J. Mach. Learn. Cybern. 2024. License: CC BY-SA 4.0
Abstract: In practical scenarios, many graphs evolve dynamically over time, and classifying new nodes that arrive without labels or historical information is challenging. To address this challenge, we design a double-layer attentive graph convolutional network (DLA-GCN) based on transfer learning, which mainly comprises three deep learning components: a double-layer graph convolutional network (DLGCN), a node multi-parameter learning (NMPL) algorithm, and a domain-adversarial transfer learning (DATL) method. For dynamic spatial correlation, DLGCN jointly exploits a pre-defined adjacency matrix and an adaptive adjacency matrix to capture local and global feature aggregation, and an inter-graph attention mechanism produces a unified representation for each node by automatically merging the different spatial correlations. To reduce complexity and improve accuracy, the NMPL component uses a matrix decomposition method to learn node-specific patterns. For dynamic temporal correlation, DATL learns and transfers similar features to serve as historical information for new nodes by jointly optimizing three loss functions: the source classifier loss, the domain classifier loss, and the target classifier loss. Experimental results on two real-world graph classification datasets show that the proposed approach improves accuracy by 18% and 10%, respectively, compared with state-of-the-art baselines.
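The sketch below is a minimal, illustrative take on the spatial component described in the abstract: graph convolution over both a pre-defined and an adaptive adjacency matrix, with a per-node attention mechanism merging the two views. The module name, shapes, and all design details (embedding-based adaptive adjacency, a two-way softmax attention) are assumptions made for illustration, not the authors' published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DoubleAdjacencyGCNLayer(nn.Module):
    """Hypothetical layer combining pre-defined and adaptive graph aggregation."""

    def __init__(self, num_nodes, in_dim, out_dim, emb_dim=16):
        super().__init__()
        # Learnable node embeddings used to build the adaptive adjacency matrix
        # (assumption: softmax over an embedding inner product).
        self.node_emb = nn.Parameter(torch.randn(num_nodes, emb_dim))
        self.w_pre = nn.Linear(in_dim, out_dim)   # weights for the pre-defined graph
        self.w_adp = nn.Linear(in_dim, out_dim)   # weights for the adaptive graph
        self.attn = nn.Linear(2 * out_dim, 2)     # per-node attention over the two views

    def forward(self, x, a_pre):
        # x: (num_nodes, in_dim); a_pre: (num_nodes, num_nodes) normalized adjacency.
        a_adp = F.softmax(F.relu(self.node_emb @ self.node_emb.t()), dim=-1)
        h_pre = self.w_pre(a_pre @ x)             # local aggregation (pre-defined graph)
        h_adp = self.w_adp(a_adp @ x)             # global aggregation (adaptive graph)
        # Attention weights decide, per node, how to merge the two representations.
        alpha = F.softmax(self.attn(torch.cat([h_pre, h_adp], dim=-1)), dim=-1)
        return alpha[:, :1] * h_pre + alpha[:, 1:] * h_adp


if __name__ == "__main__":
    n, d_in, d_out = 5, 8, 4
    layer = DoubleAdjacencyGCNLayer(n, d_in, d_out)
    x = torch.randn(n, d_in)
    a = torch.eye(n)  # toy pre-defined adjacency for the demo
    print(layer(x, a).shape)  # torch.Size([5, 4])
```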