Distributed Conformal Prediction via Message Passing

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We propose two message-passing-based approaches for achieving reliable inference in a fully decentralized setting via conformal prediction.
Abstract: Post-hoc calibration of pre-trained models is critical for ensuring reliable inference, especially in safety-critical domains such as healthcare. Conformal Prediction (CP) offers a robust post-hoc calibration framework, providing distribution-free statistical coverage guarantees for prediction sets by leveraging held-out datasets. In this work, we address a decentralized setting where each device has limited calibration data and can communicate only with its neighbors over an arbitrary graph topology. We propose two message-passing-based approaches for achieving reliable inference via CP: quantile-based distributed conformal prediction (Q-DCP) and histogram-based distributed conformal prediction (H-DCP). Q-DCP employs distributed quantile regression enhanced with tailored smoothing and regularization terms to accelerate convergence, while H-DCP uses a consensus-based histogram estimation approach. Through extensive experiments, we investigate the trade-offs between hyperparameter tuning requirements, communication overhead, coverage guarantees, and prediction set sizes across different network topologies. Our code is available at: https://github.com/HaifengWen/Distributed-Conformal-Prediction.
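To make the two approaches concrete, below is a minimal sketch of the underlying ideas: average consensus on local score histograms (the H-DCP flavor) and consensus combined with subgradient steps on the pinball loss (the Q-DCP flavor). Everything here is illustrative, not the authors' implementation: the ring topology, Metropolis weights, synthetic Beta-distributed conformity scores, and all parameter names (`K`, `n_per_node`, `alpha`, `eta`) are assumptions, and the paper's tailored smoothing and regularization terms for Q-DCP are omitted; see the linked repository for the real code.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical setup (illustrative only) ---
K = 10                # number of devices
n_per_node = 50       # calibration scores per device
alpha = 0.1           # target miscoverage level

# Synthetic conformity scores in [0, 1], e.g., 1 - softmax prob. of the true label.
scores = [rng.beta(2, 5, size=n_per_node) for _ in range(K)]

# Ring topology with Metropolis-Hastings weights (doubly stochastic).
W = np.zeros((K, K))
for i in range(K):
    for j in ((i - 1) % K, (i + 1) % K):
        W[i, j] = 1.0 / 3.0           # 1 / (1 + max(deg_i, deg_j)); deg = 2 on a ring
    W[i, i] = 1.0 - W[i].sum()

# Conformal quantile level with the finite-sample correction on the global sample size.
n_total = K * n_per_node
tau = min(np.ceil((n_total + 1) * (1 - alpha)) / n_total, 1.0)

# --- H-DCP flavor: average consensus on local histograms of the scores ---
bins = np.linspace(0.0, 1.0, 101)
hists = np.stack([np.histogram(s, bins=bins)[0] / n_per_node for s in scores])
for _ in range(200):                   # gossip rounds; mixing is geometric in the gap
    hists = W @ hists                  # each node averages its neighbors' histograms
cdf = np.cumsum(hists[0])              # any node's estimate after mixing
q_hist = bins[1:][np.searchsorted(cdf, tau)]

# --- Q-DCP flavor: consensus + subgradient steps on the pinball (quantile) loss ---
# The paper's smoothing and regularization terms, which speed up convergence,
# are omitted; this is a plain distributed subgradient method.
q = np.full(K, 0.5)
eta = 0.05
for t in range(500):
    # Subgradient of the level-tau pinball loss, evaluated on local scores.
    grad = np.array([np.mean(np.where(s <= qi, 1 - tau, -tau))
                     for s, qi in zip(scores, q)])
    q = W @ q - eta / np.sqrt(t + 1) * grad

print(f"H-DCP-style quantile ~ {q_hist:.3f}, Q-DCP-style quantile ~ {q.mean():.3f}")
print(f"Centralized quantile   = {np.quantile(np.concatenate(scores), tau):.3f}")
```

Both estimates should approach the centralized conformal quantile; any score exceeding a node's local quantile estimate is then excluded from that node's prediction set. The sketch highlights the trade-off studied in the paper: H-DCP communicates a full histogram per round but needs no step-size tuning, while Q-DCP exchanges a single scalar per round at the cost of tuning the optimization.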
Lay Summary: AI models are increasingly used in critical areas like healthcare, where reliable predictions are essential. However, ensuring reliability becomes challenging when the data needed to calibrate these models is scattered across many devices. We introduce two new methods that enable devices to collaborate and enhance their AI's trustworthiness, even if they can only communicate with their neighbors in a network. One approach lets them quickly agree on a safety margin for AI predictions by sharing only a small amount of information. The other cooperatively builds a more detailed picture of the AI's potential errors. These methods help build more reliable AI systems by calibrating them with decentralized data. We also quantify the practical trade-offs of each method, such as communication cost and ease of setup, to guide choices in real-world applications.
Link To Code: https://github.com/HaifengWen/Distributed-Conformal-Prediction
Primary Area: Theory->Probabilistic Methods
Keywords: Conformal prediction, Distributed optimization
Submission Number: 4306