The Interplay of Uncertainty Modeling and Deep Active Learning: An Empirical Analysis in Image Classification

Published: 12 May 2024, Last Modified: 12 May 2024, Accepted by TMLR
Abstract: Deep active learning (AL) seeks to reduce the annotation cost of training deep neural networks (DNNs). Deep AL strategies often focus on instances for which a DNN's predictive uncertainty is high, and Bayesian concepts are frequently adopted to model this uncertainty. Despite considerable research, a detailed analysis of the role of uncertainty in deep AL is still missing, especially regarding aleatoric and epistemic uncertainty, the two components of predictive uncertainty. This article provides an in-depth empirical study of the interplay between uncertainty and deep AL in image classification. Our study investigates four hypotheses covering both directions of this interplay: the effects of accurately estimating aleatoric and epistemic uncertainty on existing uncertainty-based AL strategies, and, conversely, the impact of uncertainty-based AL on the quality of the uncertainty estimates needed in many applications. By analyzing these hypotheses on synthetic and real-world data, we find that accurate aleatoric estimates can even impair instance selection, while accurate epistemic estimates have negligible effects. Moreover, we provide a publicly available toolbox for deep AL with various models and strategies to facilitate further research and practical applications. Code is available at github.com/anonymous.
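To make the abstract's notions concrete, the sketch below shows the standard entropy-based decomposition of predictive uncertainty into aleatoric and epistemic parts from Monte Carlo samples (e.g., MC dropout), and how an uncertainty-based AL strategy would query the instances with the highest epistemic scores. This is a minimal illustration of the general technique, not code from the paper's toolbox; the function name `decompose_uncertainty` and the toy data are assumptions for demonstration.

```python
import numpy as np

def decompose_uncertainty(probs):
    """Split predictive uncertainty into aleatoric and epistemic parts.

    probs: array of shape (T, N, C) holding class probabilities from T
    stochastic forward passes (e.g., MC dropout) for N instances.
    Returns per-instance (total, aleatoric, epistemic) uncertainties.
    """
    eps = 1e-12
    mean_probs = probs.mean(axis=0)                                # (N, C)
    total = -(mean_probs * np.log(mean_probs + eps)).sum(-1)       # entropy of the mean prediction
    aleatoric = -(probs * np.log(probs + eps)).sum(-1).mean(axis=0)  # mean entropy of the samples
    epistemic = total - aleatoric                                  # mutual information (BALD score)
    return total, aleatoric, epistemic

# Toy uncertainty-based AL step: label the unlabeled instances whose
# epistemic uncertainty is highest (hypothetical sizes: T=20, N=1000, C=10).
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=(20, 1000))
_, _, epistemic = decompose_uncertainty(probs)
query_indices = np.argsort(-epistemic)[:16]  # batch of 16 instances to annotate
```

Ranking by the aleatoric term instead of the epistemic one would give a different query set, which is exactly the kind of difference the paper's hypotheses examine.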
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: For the camera-ready version, we have made the following changes:
- deanonymization of the authors,
- deanonymization of the link to the associated GitHub repository,
- minor textual adjustments (e.g., spelling, punctuation).
Assigned Action Editor: ~Neil_Houlsby1
Submission Number: 1772