Substructure-augmented Graph Transformer for Network Representation Learning

Published: 01 Jan 2024 · Last Modified: 25 Jul 2025 · IJCNN 2024 · CC BY-SA 4.0
Abstract: The graph Transformer has recently attracted widespread attention in network representation learning. By extending the attention mechanism to graph data, it effectively overcomes several limitations of graph neural networks (GNNs) and learns richer graph representations. However, most existing graph Transformers suffer from two issues: they do not adequately consider the graph structure, and they do not use the original node features in the attention computation. To this end, we propose a novel graph Transformer method for network representation learning, named Substructure-augmented Graph Transformer (SAGT), which exploits substructures to extract graph structural information. Specifically, SAGT first uses predefined geometric substructures to perform structural matching within the original graph and records the number of matches for each node. It then merges this node-matching information with the initial node features as input to the Transformer, thereby integrating graph structure information into the node representations (see the sketch below). Moreover, we employ a synchronous learning framework in which a GNN and a Transformer jointly update the node embeddings. Experimental results show that our method outperforms existing models on seven graph prediction benchmarks.
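Below is a minimal, illustrative sketch of the substructure-augmentation step described in the abstract. The abstract does not enumerate which geometric substructures SAGT matches or how the counts are merged with the features, so this example assumes per-node triangle counts (via networkx) and node degree as stand-in substructure statistics, concatenated with the initial node features to form the Transformer input. The function name `substructure_augment` and all implementation details here are hypothetical, not the paper's actual method.

```python
import networkx as nx
import numpy as np

def substructure_augment(G, node_features):
    """Count per-node matches of simple predefined substructures and
    concatenate the counts with the initial node features.

    Assumption: triangles and degree stand in for SAGT's geometric
    substructures, which the abstract does not specify.
    """
    nodes = list(G.nodes())
    # Number of triangles each node participates in.
    tri = nx.triangles(G)
    # Node degree as a second, trivial "substructure" count.
    deg = dict(G.degree())
    counts = np.array([[tri[v], deg[v]] for v in nodes], dtype=np.float32)
    # Merge the matching information with the initial features;
    # the result would serve as the Transformer's input tokens.
    return np.concatenate([node_features, counts], axis=1)

# Toy usage: a 5-node cycle with one chord, 3-dimensional random features.
G = nx.cycle_graph(5)
G.add_edge(0, 2)  # creates the triangle 0-1-2
X = np.random.rand(5, 3).astype(np.float32)
X_aug = substructure_augment(G, X)
print(X_aug.shape)  # (5, 5): 3 original feature dims + 2 substructure counts
```

In a full pipeline, `X_aug` would feed both the Transformer branch and the GNN branch of the synchronous learning framework; how the two branches exchange or combine embeddings is not detailed in the abstract.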