Keywords: Distribution shift, Performance Estimation, Domain Adaptation, Adaptation Bound
TL;DR: We introduce overlap-awareness into prior work's disagreement discrepancy to improve the accuracy of error bounds predicted for unlabeled target domains.
Abstract: Reliable and accurate estimation of an ML model's error in unseen test domains is an important problem for safe intelligent systems. Prior work uses the \textit{disagreement discrepancy} (\disdis) to derive practical error bounds under distribution shifts: it optimizes for a classifier that maximally disagrees with a given source classifier on the target domain, thereby bounding the source classifier's error. Although this approach offers a reliable and competitively accurate estimate of the target error, we identify a problem: the disagreement discrepancy objective competes with itself in the region where the source and target domains overlap. Under the intuitive assumption that, owing to sufficient support, disagreement on the target should be no greater than disagreement on the source within this overlapping region, we devise the Overlap-aware Disagreement Discrepancy (\odd). Our \odd-based bound uses domain classifiers to estimate domain overlap and predicts target performance better than \disdis. Experiments on a wide array of benchmarks show that our method reduces the overall performance-estimation error while remaining valid and reliable. Our code and results are available on \href{https://github.com/aamixsh/odd}{GitHub}.
Latex Source Code: zip
Code Link: https://github.com/aamixsh/odd
Signed PMLR Licence Agreement: pdf
Readers: auai.org/UAI/2025/Conference, auai.org/UAI/2025/Conference/Area_Chairs, auai.org/UAI/2025/Conference/Reviewers, auai.org/UAI/2025/Conference/Submission718/Authors, auai.org/UAI/2025/Conference/Submission718/Reproducibility_Reviewers
Submission Number: 718