GPEN: Global Positional Encoding Network for Graphs

TMLR Paper 2431 Authors

27 Mar 2024 (modified: 17 Sept 2024) · Rejected by TMLR · CC BY 4.0
Abstract: Non-grid-structured data, e.g., citation networks, social networks, and web page networks, is often represented as graphs. However, such data cannot be processed by Convolutional Neural Networks (CNNs) the way images can, because the nodes are unordered, variable in number, and each node has an uncertain number of neighbours. Graph Neural Networks (GNNs) were therefore designed: they use a message-passing scheme that aggregates the feature representations of each node and its neighbours, regardless of the number of nodes or their order. Introducing feature-independent encoding methods into GNNs is crucial for preserving a graph's structural information and making node representations more discriminative. However, local-distance-aware methods, e.g., DE-GNN, only capture information within subgraphs, which leads to ambiguity when two subgraphs share the same structure. In this paper, we propose the Global Positional Encoding Network (GPEN), which embeds each node's global positional information by computing its distances to a set of randomly sampled referential nodes. We employ a contrastive loss on the pairwise distances between nodes to make the positional representations more discriminative while retaining the relative interactions between nodes. We assess GPEN on node classification tasks by incorporating the encoding method into backbone GNNs. Our results show that it outperforms state-of-the-art encoding methods on GNN benchmark datasets by up to 34.26% in accuracy.
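The core encoding step described in the abstract, measuring each node's distance to a small set of randomly sampled referential (anchor) nodes, can be illustrated with a short sketch. The snippet below is not the authors' implementation: the anchor count, the use of hop (shortest-path) distance, and names such as `global_positional_encoding` and `num_anchors` are assumptions made for illustration only.

```python
# Minimal sketch (assumed, not the paper's code): build a global positional
# encoding for every node from its shortest-path distances to K randomly
# sampled referential nodes.
import random

import networkx as nx
import numpy as np


def global_positional_encoding(G: nx.Graph, num_anchors: int = 8, seed: int = 0) -> np.ndarray:
    """Return an (N, num_anchors) matrix whose i-th row holds the hop
    distances from node i to each randomly sampled referential node."""
    rng = random.Random(seed)
    nodes = list(G.nodes())
    anchors = rng.sample(nodes, min(num_anchors, len(nodes)))
    index = {v: i for i, v in enumerate(nodes)}

    enc = np.full((len(nodes), len(anchors)), np.inf)
    for j, a in enumerate(anchors):
        # Single-source BFS gives hop distances from the anchor to all reachable nodes.
        for v, d in nx.single_source_shortest_path_length(G, a).items():
            enc[index[v], j] = d
    # Nodes unreachable from an anchor get a large finite placeholder distance.
    enc[np.isinf(enc)] = len(nodes)
    return enc


if __name__ == "__main__":
    G = nx.karate_club_graph()
    P = global_positional_encoding(G, num_anchors=4)
    print(P.shape)  # (34, 4)
```

In a setup like this, the resulting matrix would typically be concatenated with the raw node features before the first GNN layer, so that message passing operates on feature vectors that also carry global positional information; the contrastive objective on pairwise distances mentioned in the abstract would then be applied to these positional representations during training.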
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Moshe_Eliasof1
Submission Number: 2431