Keywords: conformal prediction, training label noise, meta-learning
TL;DR: We analyze the impact of training label noise on the efficiency of conformal prediction sets and propose a new meta-learning-based method to address it.
Abstract: As a distribution-free uncertainty quantification method for machine learning models, conformal prediction constructs prediction sets with statistical coverage guarantees. In real deep-learning systems, however, models trained on noisy labels produce inefficiently large prediction sets. In this work, focusing on the classification task, we study and address this robustness issue within conformal prediction. We first analyze the problem both empirically and theoretically. To alleviate it, we then propose an efficiency-aware conformalized meta-learning method that rectifies the training loss by directly minimizing the empirical size of the prediction sets on meta data. Experiments on datasets with both synthetic and real-world label noise demonstrate that the proposed method effectively improves the efficiency of the prediction sets under training label noise.
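For readers unfamiliar with the terms used in the abstract, the sketch below illustrates standard split conformal prediction for classification and the "efficiency" metric (average prediction-set size) that the paper aims to improve. The score function (1 minus the softmax probability of the true class), variable names, and toy data are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch of split conformal prediction for classification.
# Assumption: nonconformity score = 1 - softmax probability of the true class.
import numpy as np

def conformal_quantile(cal_probs, cal_labels, alpha=0.1):
    """Finite-sample-corrected (1 - alpha) quantile of calibration scores."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]   # nonconformity scores
    level = np.ceil((n + 1) * (1 - alpha)) / n            # finite-sample correction
    return np.quantile(scores, min(level, 1.0), method="higher")

def prediction_sets(test_probs, qhat):
    """Include every class whose score does not exceed the calibrated threshold."""
    return test_probs >= 1.0 - qhat                        # boolean membership matrix

def efficiency(sets):
    """Average prediction-set size: smaller means more efficient."""
    return sets.sum(axis=1).mean()

# Toy usage: a classifier degraded by training label noise tends to output flatter
# probabilities, which inflates set sizes at the same coverage level.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(10) * 2.0, size=500)
cal_labels = cal_probs.argmax(axis=1)
test_probs = rng.dirichlet(np.ones(10) * 2.0, size=200)

qhat = conformal_quantile(cal_probs, cal_labels, alpha=0.1)
sets = prediction_sets(test_probs, qhat)
print("average set size:", efficiency(sets))
```

The proposed method, as described in the abstract, goes further by using a held-out meta set to directly minimize an empirical set-size objective during training; the details of that objective are given in the paper itself and are not reproduced here.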
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 18090