Learning Latent Graph Structures and their Uncertainty

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: Accurate probabilistic graph structure learning requires specific loss functions. We identify a broad class supported by learning guarantees.
Abstract: Graph neural networks use relational information as an inductive bias to enhance prediction performance. Often, however, task-relevant relations are unknown, and graph structure learning approaches have been proposed to learn them from data. Given their latent nature, no graph observations are available to provide a direct training signal to the learnable relations. Therefore, graph topologies are typically learned on the prediction task alongside the other graph neural network parameters. In this paper, we demonstrate that minimizing point-prediction losses does not guarantee proper learning of the latent relational information and its associated uncertainty. Conversely, we prove that suitable loss functions on the stochastic model outputs simultaneously solve two tasks: (i) learning the unknown distribution of the latent graph and (ii) achieving optimal predictions of the target variable. Finally, we propose a sampling-based method that solves this joint learning task. Empirical results validate our theoretical claims and demonstrate the effectiveness of the proposed approach.
Lay Summary: Some deep learning models, particularly graph neural networks, use relational information as an effective inductive bias. However, task-relevant relations are often unknown. While approaches exist to learn these hidden connections as part of the model's main task, accurately learning the latent relationships and their associated uncertainty remains a major challenge, especially without direct supervision of the true underlying structures. Learning latent relationships can shed light on hidden structures and support more informed decision-making. For example, this can be valuable for understanding how information spreads through a social or physical network, or for analyzing complex biological systems where interactions between components are not always observable.

In this paper, we show that learning accurate probabilistic relationships requires the use of specific loss functions. In particular:

1. We demonstrate that commonly used loss functions - even probabilistic ones - do not ensure accurate learning of latent relational structures when they focus solely on point predictions.
2. We show that a different, yet broad, class of loss functions offers stronger guarantees while maintaining accurate point predictions.

Empirical analyses support these theoretical findings.
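The gap between point-prediction losses and losses on the stochastic model outputs can be illustrated with a deliberately tiny toy problem (this sketch is not the paper's actual method or architecture; the single-edge setup, the grid search, and all variable names here are illustrative assumptions). A latent edge exists with unknown probability `p_true`, and the target is 1 exactly when the edge is present. A negative log-likelihood loss on the model's output distribution identifies `p_true`, while a 0-1 loss on the most likely point prediction is flat over a whole range of edge probabilities, leaving the latent uncertainty unidentified:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent structure: one edge, present with unknown probability p_true.
# The observed target is 1 iff the (unobserved) edge is present.
p_true = 0.7
y = (rng.random(5000) < p_true).astype(float)  # observed targets only

# Candidate edge probabilities for the model's latent-graph distribution.
grid = np.linspace(0.01, 0.99, 99)

# Probabilistic loss on the stochastic output: negative log-likelihood.
# Its minimizer recovers the latent edge probability.
nll = np.array([-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
                for p in grid])
p_nll = grid[int(np.argmin(nll))]

# Point-prediction loss: 0-1 error of the most likely output.
# The prediction (p > 0.5) is identical for every p on the same side of
# 0.5, so many edge probabilities attain the same minimal loss and the
# uncertainty is not identified.
err01 = np.array([np.mean(((p > 0.5)).astype(float) != y) for p in grid])
minimizers = grid[err01 == err01.min()]

print(p_nll)            # close to p_true = 0.7
print(len(minimizers))  # many grid points tie under the 0-1 loss
```

The same phenomenon is what the paper formalizes in general: only suitable losses on the stochastic outputs pin down both the latent graph distribution and the optimal predictions.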
Link To Code: https://github.com/allemanenti/Learning-Calibrated-Structures
Primary Area: Deep Learning->Graph Neural Networks
Keywords: Graph Structure Learning, Graph Neural Networks
Submission Number: 10701