TL;DR: A powerful conformal outlier detection framework for contaminated data that leverages a limited annotation budget.
Abstract: Conformal prediction is a flexible framework for calibrating machine learning predictions, providing distribution-free statistical guarantees. In outlier detection, this calibration relies on a reference set of labeled inlier data to control the type-I error rate. However, obtaining a perfectly labeled inlier reference set is often unrealistic, and a more practical scenario involves access to a contaminated reference set containing a small fraction of outliers. This paper analyzes the impact of such contamination on the validity of conformal methods. We prove that under realistic, non-adversarial settings, calibration on contaminated data yields conservative type-I error control, shedding light on the inherent robustness of conformal methods. This conservativeness, however, typically results in a loss of power. To alleviate this limitation, we propose a novel active data-cleaning framework that leverages a limited labeling budget and an outlier detection model to selectively annotate data points in the contaminated reference set that are suspected of being outliers. By removing only the annotated outliers in this ``suspicious'' subset, we can effectively enhance power while mitigating the risk of inflating the type-I error rate, as supported by our theoretical analysis. Experiments on real datasets validate the conservative behavior of conformal methods under contamination and show that the proposed data-cleaning strategy improves power without sacrificing validity.
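A minimal sketch of the calibration step described above, assuming higher scores indicate more outlier-like points; the helper name `conformal_p_value`, the synthetic Gaussian scores, and the contamination fraction are illustrative placeholders, not the paper's implementation. It illustrates why a contaminated reference set tends to make detections rarer rather than invalid: the injected outliers inflate the reference scores, which only pushes the conformal p-values upward.

```python
# Sketch: conformal outlier detection with a (possibly contaminated) reference set.
# A test point is flagged as an outlier when its conformal p-value is <= alpha.
import numpy as np

def conformal_p_value(test_score, reference_scores):
    """Marginal conformal p-value: rank of the test score among the
    reference (calibration) scores, with the usual +1 correction."""
    n = len(reference_scores)
    return (1 + np.sum(reference_scores >= test_score)) / (n + 1)

rng = np.random.default_rng(0)
# Reference set intended to contain only inliers, but contaminated here
# with a small fraction of higher-scoring outliers (synthetic scores).
reference_scores = np.concatenate([rng.normal(0.0, 1.0, 950),   # inliers
                                   rng.normal(4.0, 1.0, 50)])   # contamination
test_scores = rng.normal(4.0, 1.0, 10)                          # outlier test points

alpha = 0.1
flags = [conformal_p_value(s, reference_scores) <= alpha for s in test_scores]
# Contamination inflates the reference scores, so fewer points are flagged:
# type-I error control stays valid (conservative), but power drops.
print(flags)
```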
Lay Summary: Machine learning systems are often used in decision-making---such as in detecting financial fraud. In such cases, false alarms---mistakenly flagging legitimate transactions as fraudulent---can lead to costly investigations and therefore need to be controlled. Conformal prediction is a flexible framework that controls the false alarm rate while providing statistical guarantees. It does this by learning what ``normal'' samples look like from a reference set.
But what if the reference set---presumed to contain solely legitimate transactions---accidentally includes frauds? This kind of contamination is common in practice, and it raises an important question: how does it affect the system’s ability to reliably control its false alarm rate?
We show that conformal prediction remains robust and reliable even when the reference set is contaminated. In fact, it becomes overly conservative and makes fewer detections, which limits the system’s practical value.
To address this, we introduce a method that partially ``cleans'' the reference data by asking a human expert to check just a small number of the most suspicious examples. By removing only those flagged as outliers, we make the system more powerful while still providing error control guarantees.
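As a rough illustration of this cleaning step (a sketch only: the `clean_reference` helper, the synthetic scores, and the label array standing in for a human annotator are all assumptions, not the released code), the idea is to rank the reference points by their outlier score, query the labels of only the B most suspicious ones, and drop just those confirmed as outliers before calibrating.

```python
# Sketch: budget-limited cleaning of a contaminated reference set.
import numpy as np

def clean_reference(reference_scores, annotator_labels, budget):
    """Query the `budget` highest-scoring (most suspicious) reference points
    and remove those the annotator confirms as outliers (label 1)."""
    suspicious = np.argsort(reference_scores)[::-1][:budget]      # top-B scores
    confirmed_outliers = suspicious[annotator_labels[suspicious] == 1]
    keep = np.setdiff1d(np.arange(len(reference_scores)), confirmed_outliers)
    return reference_scores[keep]

rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(0, 1, 950), rng.normal(4, 1, 50)])
labels = np.concatenate([np.zeros(950, dtype=int), np.ones(50, dtype=int)])
cleaned = clean_reference(scores, labels, budget=60)
print(len(scores), "->", len(cleaned))  # most injected outliers removed within budget
```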
Link To Code: https://github.com/Meshiba/robust-conformal-od
Primary Area: Probabilistic Methods
Keywords: Conformal Prediction, Hypothesis Testing, Out-of-Distribution Detection, Contaminated Data
Submission Number: 7611