Keywords: graph embedding neural networks (GENN), graph coordinates, graph analytics, network embedding, graph neural networks (GNN)
TL;DR: We address ML on graphs in the setting where training data is obtained from one view of the graph and the trained NN model is reused on other views of the graph without retraining.
Abstract: When a graph is massive, or when observability and privacy constraints prevent access to the entire topology, ML models must be trained using only partial information about the topology. Such models lack reusability when the same graph is specified using a different partial set of measurements or on different subgraphs. We present an approach that makes node representations comparable across different graph views produced from the same underlying topology, and use it with Graph Embedding Neural Networks (GENNs) on the OGBN-products benchmark dataset to evaluate its effectiveness. The topology of the graph or a subgraph is captured using distances to a very small set of anchor nodes, resulting in a view of the graph that depends on the anchors. The dimensionality of these measurements is further reduced using SVD, and the resulting topology coordinates are used in a GENN scheme. Directly reusing such a model to make predictions on different views of the graph does not produce accurate results. By using a Procrustes transform to align a very small set of reference nodes across views obtained from different sets of anchors, we demonstrate that a model trained on one view can make predictions on a different view of the graph with about the same accuracy. We also show that the proposed method remains accurate when the views are obtained from different, partially overlapping subgraphs. The approach requires only a few reference nodes, is compatible with any neural network classifier, and is particularly suitable for privacy-sensitive or federated settings where only projections or a small set of reference nodes can be shared.
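The abstract describes a pipeline of anchor-node distances, SVD reduction, and Procrustes alignment over a small shared reference set. The following is a minimal sketch of that idea in Python, not the paper's implementation: the graph, anchor sets, reference nodes, dimensionality, and helper names (`topology_coordinates`, `align_views`) are illustrative assumptions.

```python
# Illustrative sketch only: anchor distances -> SVD coordinates -> Procrustes alignment.
import numpy as np
import networkx as nx
from scipy.linalg import orthogonal_procrustes

def topology_coordinates(G, anchors, dim):
    """Distances from every node to a small anchor set, reduced via SVD.
    Assumes the (sub)graph is connected so all distances are defined."""
    nodes = list(G.nodes())
    D = np.zeros((len(nodes), len(anchors)))
    for j, a in enumerate(anchors):
        lengths = nx.single_source_shortest_path_length(G, a)
        for i, v in enumerate(nodes):
            D[i, j] = lengths[v]
    # Keep the leading SVD components as low-dimensional topology coordinates.
    U, S, _ = np.linalg.svd(D, full_matrices=False)
    return dict(zip(nodes, U[:, :dim] * S[:dim]))

def align_views(coords_src, coords_tgt, reference_nodes):
    """Fit a Procrustes transform on a few shared reference nodes and return
    a map from source-view coordinates to the target view's coordinate frame."""
    A = np.array([coords_src[v] for v in reference_nodes])
    B = np.array([coords_tgt[v] for v in reference_nodes])
    mu_A, mu_B = A.mean(axis=0), B.mean(axis=0)
    R, _ = orthogonal_procrustes(A - mu_A, B - mu_B)
    return lambda x: (x - mu_A) @ R + mu_B

# Usage: coordinates from a second anchor set become comparable to the first,
# so a classifier trained on view 1 can score nodes measured in view 2.
G = nx.karate_club_graph()
view1 = topology_coordinates(G, anchors=[0, 5, 11, 20], dim=3)
view2 = topology_coordinates(G, anchors=[2, 8, 15, 30], dim=3)
to_view1 = align_views(view2, view1, reference_nodes=[1, 3, 7, 12, 25])
aligned_view2 = {v: to_view1(x) for v, x in view2.items()}
```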
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 21841