Learning to Abstain From Uninformative Data

Published: 06 Feb 2024, Last Modified: 06 Feb 2024. Accepted by TMLR.
Abstract: Learning and decision-making in domains with naturally high noise-to-signal ratios – such as finance or healthcare – is often challenging, while the stakes are very high. In this paper, we study the problem of learning and acting under a general noisy generative process. In this problem, a significant proportion of the data distribution consists of uninformative samples with high label noise, while the remainder carries useful information represented by low label noise. This dichotomy is present during both training and inference, so uninformative data must be handled properly at both training and test time. We propose a novel approach to learning under these conditions via a loss inspired by selective learning theory. By minimizing this loss, the model is guaranteed to make a near-optimal decision, distinguishing informative data from uninformative data and making predictions on the former. We build upon the strength of our theoretical guarantees by describing an iterative algorithm that jointly optimizes both a predictor and a selector, and we evaluate its empirical performance in a variety of settings.
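To make the selective-learning idea concrete, the following is a minimal sketch of the kind of selective loss the abstract describes: a predictor's per-sample losses are weighted by a selector's acceptance probabilities, with a penalty that keeps the selector from abstaining on everything. The function name `selective_loss` and the parameters `target_coverage` and `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def selective_loss(per_sample_loss, select_prob, target_coverage=0.5, lam=32.0):
    """Sketch of a selective loss (assumed form, not the paper's exact loss).

    per_sample_loss: array of the predictor's loss on each sample.
    select_prob:     array in [0, 1]; the selector's probability of
                     accepting (i.e., not abstaining on) each sample.
    """
    # Empirical selective risk: loss averaged over the accepted samples,
    # weighted by the selector's acceptance probabilities.
    risk = (select_prob * per_sample_loss).sum() / max(select_prob.sum(), 1e-8)
    # Coverage penalty: discourage the selector from rejecting too much,
    # so it cannot trivially minimize risk by abstaining everywhere.
    coverage = select_prob.mean()
    penalty = lam * max(target_coverage - coverage, 0.0) ** 2
    return risk + penalty
```

On a toy batch where two samples are informative (low loss) and two are uninformative (high loss), a selector that accepts only the informative pair achieves a much lower selective loss than one that accepts everything, which is the behavior the joint predictor–selector optimization aims for.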
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: 1) Based on the Editor's comment, we have changed the colored (blue) text in the revised manuscript to black. 2) We have also added the answers to A6 and A7 to the future work section, as suggested by Reviewer VdEU. 3) We have included a GitHub link releasing the code for reproducing the experiments. 4) We have included an acknowledgement to thank those who contributed to this paper.
Code: https://github.com/morganstanley/MSML/tree/main/paper/Learn_to_Abstain
Assigned Action Editor: ~Varun_Kanade1
Submission Number: 1562