Adapting Prediction Sets to Distribution Shifts Without Labels

Published: 07 May 2025 · Last Modified: 13 Jun 2025 · UAI 2025 Poster · CC BY 4.0
Keywords: Set-valued classification, Test time adaptation
TL;DR: We propose a method for improving set-valued classifiers under distribution shift by leveraging the base model's uncertainty on the target data.
Abstract: Recently there has been a surge of interest in deploying confidence set predictions rather than point predictions in machine learning. Unfortunately, the effectiveness of such prediction sets is frequently impaired by distribution shifts in practice, and the challenge is often compounded by the lack of ground-truth labels at test time. Focusing on a standard set-valued prediction framework called conformal prediction (CP), this paper studies how to improve its practical performance using only unlabeled data from the shifted test domain. This is achieved by two new methods called $\texttt{ECP}$ and $\texttt{E{\small A}CP}$, whose main idea is to adjust the CP score function according to the base model's own uncertainty evaluated on the target data. Through extensive experiments on a number of large-scale datasets and neural network architectures, we show that our methods provide consistent improvements over existing baselines and nearly match the performance of fully supervised methods.
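To make the idea concrete, below is a minimal sketch of split conformal prediction for classification with an illustrative, entropy-based threshold adjustment computed from unlabeled test inputs. This is an assumption-laden illustration of the general "use the base model's own uncertainty on the target data" idea, not the authors' exact ECP/EaCP procedure; the function names and the specific adjustment rule are hypothetical.

```python
# Minimal sketch (NOT the authors' exact ECP/EaCP): split conformal prediction
# with an illustrative entropy-based threshold adjustment on unlabeled test data.
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy(probs, eps=1e-12):
    # Predictive entropy per example, a label-free uncertainty measure.
    return -(probs * np.log(probs + eps)).sum(axis=1)

def calibrate_threshold(cal_logits, cal_labels, alpha=0.1):
    """Standard split-conformal quantile of the 1 - p(true label) scores."""
    probs = softmax(cal_logits)
    scores = 1.0 - probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, q_level, method="higher")

def entropy_adjusted_threshold(tau, cal_logits, test_logits):
    """Hypothetical adjustment: inflate the threshold when the base model is
    more uncertain on the unlabeled test data than on the calibration data."""
    h_cal = entropy(softmax(cal_logits)).mean()
    h_test = entropy(softmax(test_logits)).mean()
    return min(1.0, tau * (h_test / max(h_cal, 1e-12)))

def predict_sets(test_logits, tau):
    """Include every class whose score 1 - p(y|x) falls below the threshold."""
    probs = softmax(test_logits)
    return [np.where(1.0 - p <= tau)[0] for p in probs]
```

Under distribution shift, the unadjusted calibration threshold typically yields under-covering sets; widening it in proportion to the rise in the model's own predictive uncertainty on the target data (as sketched above) is one label-free way to restore coverage, at the cost of larger sets.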
LaTeX Source Code: zip
Code Link: https://github.com/uoguelph-mlrg/EaCP
Signed PMLR Licence Agreement: pdf
Submission Number: 297