Partial identification of the maximum mean discrepancy with mismeasured data

Published: 26 Apr 2024; Last Modified: 15 Jul 2024; UAI 2024 poster; License: CC BY 4.0
Keywords: maximum mean discrepancy, measurement error, uncertainty quantification
TL;DR: We suggest a method to quantify the uncertainty in the maximum mean discrepancy
Abstract: Nonparametric estimates of the distance between two distributions, such as the Maximum Mean Discrepancy (MMD), are often used in machine learning applications. However, the majority of existing literature assumes that error-free samples from the two distributions of interest are available. We relax this assumption and study the estimation of the MMD under $\epsilon$-contamination, where a possibly non-random $\epsilon$ proportion of one distribution is erroneously grouped with the other. We show that under $\epsilon$-contamination, the typical estimate of the MMD is unreliable. Instead, we study partial identification of the MMD, and characterize sharp upper and lower bounds that contain the true, unknown MMD. We propose a method to estimate these bounds, and show that it gives estimates that converge to the sharpest possible bounds on the MMD as sample size increases, with a convergence rate that is faster than alternative approaches. Using three datasets, we empirically validate that our approach is superior to the alternatives: it gives tight bounds with a low false coverage rate.
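The sketch below is an illustrative example, not the authors' method from the paper or the linked repository: it computes the standard unbiased MMD$^2$ estimate with an RBF kernel and then shows how an $\epsilon$-contaminated sample shifts that estimate, which is the failure mode the abstract describes. The bandwidth, sample sizes, and contamination proportion are arbitrary choices for the demonstration.

```python
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    # Gram matrix of the Gaussian/RBF kernel k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2)).
    sq_dists = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2 * x @ y.T
    return np.exp(-sq_dists / (2 * bandwidth**2))

def mmd2_unbiased(x, y, bandwidth=1.0):
    # Standard unbiased estimator of MMD^2 between samples x and y.
    kxx = rbf_kernel(x, x, bandwidth)
    kyy = rbf_kernel(y, y, bandwidth)
    kxy = rbf_kernel(x, y, bandwidth)
    n, m = len(x), len(y)
    term_xx = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    term_yy = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_xx + term_yy - 2 * kxy.mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(500, 2))   # sample from P
y = rng.normal(1.0, 1.0, size=(500, 2))   # clean sample from Q
eps = 0.2                                 # hypothetical contamination proportion
n_bad = int(eps * len(y))
# Replace an eps fraction of the Q sample with points drawn from P.
y_contaminated = np.vstack([y[n_bad:], rng.normal(0.0, 1.0, size=(n_bad, 2))])

print("MMD^2 (clean):       ", mmd2_unbiased(x, y))
print("MMD^2 (contaminated):", mmd2_unbiased(x, y_contaminated))
```

Running this, the contaminated estimate is noticeably smaller than the clean one, illustrating why a single plug-in MMD value is unreliable under $\epsilon$-contamination and why interval (partial-identification) bounds are of interest.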
List Of Authors: Nafshi, Ron and Makar, Maggie
Code Url: https://github.com/mymakar/mmd_uncertainty
Submission Number: 458