Abstract: Real-life data often arises from relational structures that are best modeled by graphs. Bayesian learning on graphs has emerged as a framework for encoding prior beliefs about network data in a mathematically principled way. The approach provides uncertainty estimates and can perform well with small sample sizes when an informative prior is available. Much of the work on Bayesian graph neural networks (GNNs) has focused on inferring the structure of the underlying graph and the model weights. Although research effort has been directed towards neural architecture search for GNNs, existing strategies are not Bayesian and return only a point estimate of the optimal architecture. In this work, we propose a method for principled Bayesian modeling of GNNs that allows for inference of a posterior over the architecture (number of layers, number of active neurons, aggregators, pooling), the graph, and the model parameters. We evaluate our proposed method on three benchmark datasets.