GPEN: Global Positional Encoding Network for Graphs

TMLR Paper 2431 Authors

27 Mar 2024 (modified: 28 Mar 2024) · Under review for TMLR · CC BY-SA 4.0
Abstract: Non-grid-structured data, e.g., citation networks, social networks, and web page networks, is often represented as graphs. However, such data cannot be fed into Convolutional Neural Networks (CNNs) the way images can, because of the variable number of unordered nodes and the varying number of neighbours per node. Graph Neural Networks (GNNs) were designed for this setting: they use a message-passing scheme to aggregate the feature representations of each node and its neighbours, regardless of the number of nodes and their order. Introducing feature-independent encoding methods into GNNs is crucial for preserving graphs' structural information and making node representations more discriminative. However, local-distance-aware methods, e.g., DE-GNN, only capture information within subgraphs, which leads to ambiguity when two subgraphs share the same structure. In this paper, we propose the Global Positional Encoding Network (GPEN), which embeds each node's global positional information by computing its distances to a set of randomly sampled referential nodes. We employ a contrastive loss on the pairwise distances between different nodes to make the positional representations more discriminative while retaining the relative interactions between nodes. We evaluate GPEN on node classification datasets by inserting the encoding scheme into a backbone GNN and demonstrate that it outperforms state-of-the-art encoding methods on homophilic graphs by up to 33.12% in accuracy.
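To make the encoding idea concrete, below is a minimal sketch assuming a NetworkX graph as input. The function names (anchor_distance_encoding, pairwise_contrastive_loss), the use of shortest-path distance, and the margin-based hinge form of the contrastive term are illustrative assumptions, not the authors' exact formulation: each node is represented by its distances to a few randomly sampled referential (anchor) nodes, and a simple contrastive loss pushes the positional representations of distinct nodes apart.

import networkx as nx
import numpy as np
import torch
import torch.nn.functional as F

def anchor_distance_encoding(graph: nx.Graph, num_anchors: int = 8, seed: int = 0) -> torch.Tensor:
    """Encode each node by its shortest-path distances to randomly sampled anchor nodes."""
    rng = np.random.default_rng(seed)
    nodes = list(graph.nodes())
    anchors = rng.choice(nodes, size=min(num_anchors, len(nodes)), replace=False)
    index = {n: i for i, n in enumerate(nodes)}
    # Distances from every node to every anchor; unreachable pairs keep a large sentinel value.
    enc = np.full((len(nodes), len(anchors)), fill_value=len(nodes), dtype=np.float32)
    for j, a in enumerate(anchors):
        for n, d in nx.single_source_shortest_path_length(graph, a).items():
            enc[index[n], j] = d
    return torch.from_numpy(enc)

def pairwise_contrastive_loss(pos: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Hinge loss pushing positional representations of distinct nodes at least `margin` apart."""
    dists = torch.cdist(pos, pos)                      # (N, N) pairwise Euclidean distances
    off_diag = ~torch.eye(len(pos), dtype=torch.bool)  # mask out self-pairs
    return F.relu(margin - dists[off_diag]).mean()

# Example usage on a small graph.
g = nx.karate_club_graph()
pos = anchor_distance_encoding(g, num_anchors=4)
print(pos.shape)                       # torch.Size([34, 4])
print(pairwise_contrastive_loss(pos))  # scalar loss

In a full pipeline, such positional encodings would typically be concatenated with the node features before being passed to the backbone GNN.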
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Moshe_Eliasof1
Submission Number: 2431