Revisiting Random Walks for Learning on Graphs

Published: 17 Jun 2024, Last Modified: 13 Jul 2024
Venue: ICML 2024 Workshop GRaM
License: CC BY 4.0
Track: Extended abstract
Keywords: Graph machine learning, random walk, language models, invariance, universal approximation
TL;DR: We revisit a simple idea where a random walk on a graph produces a machine-readable record, which is processed by a deep neural network to directly make vertex-level or graph-level predictions.
Abstract: We revisit a simple idea for machine learning on graphs, where a random walk on a graph produces a machine-readable record, and this record is processed by a deep neural network to directly make vertex-level or graph-level predictions. We call these stochastic machines random walk neural networks, and show that we can design them to be isomorphism invariant while being capable of universal approximation of graph functions in probability. Notably, almost any recording of a random walk guarantees probabilistic invariance as long as vertices are anonymized. This enables us to record random walks in plain text and adopt a language model to read these text records and solve graph tasks. We demonstrate random walk neural networks based on pre-trained language models on several hard problems on graphs: separating strongly regular graphs where 3-WL fails, counting substructures, and transductive classification on the arXiv citation network without training. Code is at https://github.com/jw9730/random-walk.
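To make the core idea concrete, here is a minimal sketch of recording an anonymized random walk as plain text. This is an illustration only, not the paper's implementation: the walk policy (uniform neighbor sampling), the first-visit naming scheme, and the dash-separated record format are all assumptions made for this sketch; see the linked repository for the actual method.

```python
import random

def anonymized_walk_record(adj, start, length, rng=None):
    """Walk `length` steps from `start`; name vertices by first-visit order.

    `adj` maps each vertex to a list of its neighbors. Returns a plain-text
    record like "1-2-3-1-4" that does not expose the original vertex labels,
    which is what makes the record's distribution invariant under graph
    isomorphism.
    """
    rng = rng or random.Random(0)
    names = {}  # original vertex -> anonymized integer name

    def name(v):
        if v not in names:
            names[v] = len(names) + 1  # first-visit naming: 1, 2, 3, ...
        return names[v]

    v = start
    record = [str(name(v))]
    for _ in range(length):
        v = rng.choice(adj[v])  # step to a uniformly random neighbor
        record.append(str(name(v)))
    return "-".join(record)

# Example: a 4-cycle. Any isomorphic relabeling of the vertices induces the
# same distribution over text records, which a language model can then read.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(anonymized_walk_record(adj, start=0, length=8))
```

In the paper's framework, a text record like this would be handed to a (pre-trained) language model that outputs the vertex-level or graph-level prediction directly.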
Submission Number: 80