Deep Active Learning with Noise Stability

Published: 28 Jan 2022, Last Modified: 22 Oct 2023
ICLR 2022 Submitted
Readers: Everyone
Keywords: deep learning, active learning, noise stability
Abstract: Uncertainty estimation for unlabeled data is crucial to active learning. With a deep neural network employed as the backbone model, data selection becomes highly challenging because the model's inferences may be over-confident. Existing methods usually resort to multi-pass model training or adversarial training to handle this challenge, resulting in complex and inefficient pipelines that hinder practical deployment. To address this issue, we propose a novel Single-Training Multi-Inference algorithm that leverages noise stability to estimate data uncertainty. Specifically, uncertainty is measured by the degree to which the output deviates from the original observation when the model parameters are randomly perturbed by noise. We provide a theoretical analysis for the case of small Gaussian noise, showing that our method has a solid connection with the classical theory of variance reduction: labeling a data sample of higher uncertainty, as indicated by lower noise stability, contributes more to reducing the variance over existing training samples. Despite its simplicity and efficiency, our method outperforms state-of-the-art active learning baselines on image classification and semantic segmentation tasks.
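To make the described procedure concrete, below is a minimal sketch of how such a noise-stability score might be computed, assuming a PyTorch classifier; the function name `noise_stability_uncertainty`, the noise scale `alpha`, the perturbation count `k`, and the per-parameter noise scaling are illustrative assumptions, not the authors' released implementation.

```python
import copy
import torch

@torch.no_grad()
def noise_stability_uncertainty(model, x, k=10, alpha=0.01):
    """Score unlabeled inputs x by how much the model's output drifts
    under k random Gaussian perturbations of the trained parameters
    (Single-Training Multi-Inference: train once, infer many times)."""
    model.eval()
    base_out = model(x)  # output of the unperturbed model
    deviation = torch.zeros(x.size(0), device=x.device)
    for _ in range(k):
        noisy = copy.deepcopy(model)
        for p in noisy.parameters():
            # Small Gaussian noise; scaling by the parameter magnitude
            # is one plausible choice (an assumption for this sketch).
            p.add_(alpha * p.abs().mean() * torch.randn_like(p))
        # Larger output deviation = lower noise stability = higher uncertainty.
        deviation += (noisy(x) - base_out).norm(dim=1)
    return deviation / k
```

In an active learning loop, one would rank the unlabeled pool by this score and send the highest-scoring samples to the oracle for labeling.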
One-sentence Summary: This paper presents a novel active learning algorithm that leverages noise stability to estimate data uncertainty.
Supplementary Material: zip
Community Implementations: 2 code implementations (https://www.catalyzex.com/paper/arxiv:2205.13340/code)