Catastrophic Failures of Neural Active Learning on Heteroskedastic Distributions

Published: 02 Dec 2021, Last Modified: 05 May 2023
NeurIPS 2021 Workshop DistShift Poster
Keywords: active learning, neural active learning, noisy data, badge
TL;DR: Active learning algorithms are prone to over-sampling examples with noisy labels, even when those examples have no informative structure.
Abstract: Models which can actively seek out the best quality training data hold the promise of more accurate, adaptable, and efficient machine learning. State-of-the-art techniques tend to prefer examples which are the most difficult to classify. While this works well on homogeneous datasets, we find that it can lead to catastrophic failures when performing active learning on multiple distributions which have different degrees of label noise (heteroskedasticity). Most active learning algorithms strongly prefer to draw from the distribution with more noise, even if its examples have no informative structure (such as solid color images). We find that active learning which encourages diversity and model uncertainty in the selected examples can significantly mitigate these failures. We hope these observations are immediately useful to practitioners and can lead to the construction of more realistic and challenging active learning benchmarks.
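As a rough, hypothetical illustration of the failure mode described in the abstract (not code from the paper), the sketch below builds a pool that mixes a clean, structured subset with a near-featureless, randomly labeled subset, then applies a plain max-entropy acquisition rule. The toy setup, names, and parameters are illustrative assumptions; the point is only that the queried batch tends to concentrate on the noisy subset.

```python
# Hypothetical toy sketch (not the paper's code): a max-entropy acquisition
# rule applied to a pool mixing a clean, structured subset with a
# featureless, randomly labeled subset (heteroskedastic label noise).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_clean, n_noisy = 1000, 1000

# Clean subset: informative 2-D features, label = sign of the first feature.
X_clean = rng.normal(size=(n_clean, 2))
y_clean = (X_clean[:, 0] > 0).astype(int)

# Noisy subset: near-featureless inputs (analogous to solid color images)
# paired with purely random labels.
X_noisy = np.repeat(rng.uniform(-0.1, 0.1, size=(n_noisy, 1)), 2, axis=1)
y_noisy = rng.integers(0, 2, size=n_noisy)

X_pool = np.vstack([X_clean, X_noisy])
y_pool = np.concatenate([y_clean, y_noisy])
is_noisy = np.concatenate([np.zeros(n_clean, bool), np.ones(n_noisy, bool)])

# Fit a model on a small random seed set, as in pool-based active learning.
seed = rng.choice(len(X_pool), size=20, replace=False)
model = LogisticRegression().fit(X_pool[seed], y_pool[seed])

# Max-entropy acquisition: query the 100 examples the model is least sure about.
probs = model.predict_proba(X_pool)
entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
query = np.argsort(-entropy)[:100]

print(f"queried examples drawn from the noisy subset: {is_noisy[query].mean():.0%}")
```

In a setup like this, the uninformative examples sit near the decision boundary and their random labels keep the model permanently uncertain about them, so a purely uncertainty-driven rule keeps requesting their labels. Consistent with the mitigation the abstract describes, acquisition strategies that combine uncertainty with batch diversity (e.g., BADGE, listed in the keywords) tend to spread queries across both subsets rather than concentrating on the noise source.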