Keywords: Graph Neural Networks, Conformal Prediction, Federated Learning
TL;DR: We extend Conformal Prediction to federated graph learning and propose a federated VAE for neighbor reconstruction, improving uncertainty quantification while maintaining coverage guarantees.
Abstract: Uncertainty quantification is essential for reliable federated graph learning, yet existing methods struggle with decentralized and heterogeneous data. In this work, we first extend Conformal Prediction (CP), a well-established method for uncertainty quantification, to federated graph learning, formalizing conditions for CP validity under partial exchangeability across distributed subgraphs. We prove that our approach maintains rigorous coverage guarantees even with client-specific data distributions. Building on this foundation, we address a key challenge in federated graph learning: missing neighbor information, which inflates CP set sizes and reduces efficiency. To mitigate this, we propose a variational autoencoder (VAE)-based architecture that reconstructs missing neighbors while preserving data privacy. Empirical evaluations on real-world datasets demonstrate the effectiveness of our method: our theoretically grounded federated training strategy reduces CP set sizes by 15.4%, with the VAE-based reconstruction providing an additional 4.9% improvement, all while maintaining rigorous coverage guarantees.
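For readers unfamiliar with how CP set sizes and coverage relate, the sketch below illustrates standard (non-federated) split conformal prediction for classification; it is not the paper's federated or graph-specific procedure, and the helper names (`conformal_quantile`, `prediction_set`) and toy numbers are illustrative assumptions only.

```python
import numpy as np

def conformal_quantile(cal_scores, alpha):
    """Finite-sample-corrected (1 - alpha) quantile of calibration nonconformity scores."""
    n = len(cal_scores)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(q_level, 1.0), method="higher")

def prediction_set(probs, q_hat):
    """Return all labels whose nonconformity score (1 - predicted probability) is within q_hat."""
    return np.where(1.0 - probs <= q_hat)[0]

# Toy calibration set: softmax outputs and true labels (hypothetical values).
cal_probs = np.array([[0.7, 0.2, 0.1],
                      [0.1, 0.8, 0.1],
                      [0.3, 0.3, 0.4]])
cal_labels = np.array([0, 1, 2])
cal_scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]

q_hat = conformal_quantile(cal_scores, alpha=0.1)
# Prediction set for a new test point; smaller sets mean more efficient CP.
print(prediction_set(np.array([0.5, 0.3, 0.2]), q_hat))
```

Under exchangeability of calibration and test points, sets built this way contain the true label with probability at least 1 - alpha; the abstract's contribution concerns preserving this guarantee under partial exchangeability across federated subgraphs and shrinking the resulting set sizes.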
Supplementary Material: zip
Latex Source Code: zip
Signed PMLR Licence Agreement: pdf
Readers: auai.org/UAI/2025/Conference, auai.org/UAI/2025/Conference/Area_Chairs, auai.org/UAI/2025/Conference/Reviewers, auai.org/UAI/2025/Conference/Submission483/Authors, auai.org/UAI/2025/Conference/Submission483/Reproducibility_Reviewers
Submission Number: 483