Keywords: Trustworthy Graph Learning; Distributionally Robust Optimization; Robustness; Fairness
TL;DR: A unified distributionally robust framework that enhances the trustworthiness of graph neural networks by jointly modeling robustness and fairness under data distributional uncertainty.
Abstract: Graph Neural Networks (GNNs) face growing demands for trustworthiness, encompassing robustness, fairness, and related dimensions. However, these dimensions are often undermined by various perturbations, which induce distributional uncertainty and compromise the trustworthiness of GNNs. To address this, we propose DICT, a novel framework that models distributional uncertainty to achieve trustworthy graph learning. Specifically, DICT formulates a unified optimization objective that captures perturbation-induced distributional shifts in graph topology, node features, and labels, and minimizes the worst-case risk over the resulting uncertainty set. To make the primal infinite-dimensional problem tractable, we exploit strong duality and the local Lipschitz continuity of the loss to reformulate the objective as a finite-dimensional min-max problem. We focus on robustness and fairness as the primary instantiations of DICT because they are not only critical in real-world applications but also provide transferable modeling principles for broader trustworthiness objectives. By formulating fairness as an uncertainty set, DICT pioneers the unification of robustness and fairness within a single optimization framework. Extensive experiments across diverse benchmarks and GNN backbones demonstrate that DICT consistently improves both robustness and fairness, validating the framework's effectiveness and adaptability. We envision uncertainty constraints as a foundational principle for trustworthy graph learning and a step toward broader advancements in trustworthy AI.
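The worst-case risk minimization and its dual reformulation described in the abstract follow the standard distributionally robust optimization pattern; a generic sketch is shown below. The notation here (nominal distribution $P$, transport cost $c$, radius $\rho$) is illustrative standard DRO notation, not necessarily the paper's own.

```latex
% Primal: minimize worst-case risk over an uncertainty set around the
% nominal data distribution P (e.g., a Wasserstein ball of radius rho).
\min_{\theta} \ \sup_{Q \in \mathcal{U}(P)} \ \mathbb{E}_{(G,\,y) \sim Q}
  \bigl[\ell\bigl(f_\theta(G),\, y\bigr)\bigr],
\qquad
\mathcal{U}(P) = \{\, Q : W_c(Q, P) \le \rho \,\}.

% Dual: under strong duality, the infinite-dimensional supremum over
% distributions Q collapses to a finite-dimensional inner maximization
% over perturbed samples G', yielding a tractable min-max problem.
\min_{\theta,\ \lambda \ge 0} \ \lambda \rho +
\mathbb{E}_{(G,\,y) \sim P}
  \Bigl[\, \sup_{G'} \ \ell\bigl(f_\theta(G'),\, y\bigr)
    - \lambda\, c(G', G) \,\Bigr].
```

Local Lipschitz continuity of the loss is what typically guarantees the inner supremum is finite and well-behaved for a sufficiently large dual multiplier $\lambda$.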
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 9268