Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds

25 Sept 2019, 19:24 (edited 19 Feb 2022) · ICLR 2020 Conference Blind Submission
  • Original Pdf: pdf
  • Code: [JordanAsh/badge](https://github.com/JordanAsh/badge) + [2 community implementations](https://paperswithcode.com/paper/?openreview=ryghZJBKPS)
  • Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10), [MNIST](https://paperswithcode.com/dataset/mnist), [SVHN](https://paperswithcode.com/dataset/svhn)
  • TL;DR: We introduce a new batch active learning algorithm that's robust to model architecture, batch size, and dataset.
  • Abstract: We design a new algorithm for batch active learning with deep neural network models. Our algorithm, Batch Active learning by Diverse Gradient Embeddings (BADGE), samples groups of points that are disparate and high-magnitude when represented in a hallucinated gradient space, a strategy designed to incorporate both predictive uncertainty and sample diversity into every selected batch. Crucially, BADGE trades off between diversity and uncertainty without requiring any hand-tuned hyperparameters. While other approaches sometimes succeed for particular batch sizes or architectures, BADGE consistently performs as well as or better, making it a useful option for real-world active learning problems. (A minimal sketch of the embedding and sampling steps appears after this listing.)
  • Keywords: deep learning, active learning, batch active learning
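
The abstract compresses two steps: form a gradient embedding for each unlabeled point by hallucinating its label as the model's own argmax prediction, then sample a batch that is both high-magnitude and spread out in that embedding space. Below is a minimal NumPy sketch of both steps. The helper names `gradient_embeddings` and `kmeans_pp_select`, the stand-in model outputs, and the choice to seed k-means++ at the largest-norm embedding are all assumptions for illustration (vanilla k-means++ draws the first center uniformly at random); for the authors' actual implementation, see the JordanAsh/badge repository linked above.

```python
import numpy as np

def gradient_embeddings(probs, penultimate):
    """Hallucinated last-layer gradient embeddings (hypothetical helper).

    probs:        (n, C) softmax outputs of the current model
    penultimate:  (n, d) activations feeding the final linear layer

    Each point's label is "hallucinated" as its own argmax prediction;
    the embedding is the gradient of the cross-entropy loss w.r.t. the
    last-layer weights under that label, which factors as
    (probs - one_hot(argmax)) outer-product penultimate.
    """
    n = probs.shape[0]
    scale = probs.copy()
    scale[np.arange(n), probs.argmax(axis=1)] -= 1.0   # p - e_{y_hat}
    return (scale[:, :, None] * penultimate[:, None, :]).reshape(n, -1)

def kmeans_pp_select(emb, k, seed=0):
    """Pick k indices by k-means++ seeding on the embeddings.

    Sampling proportional to squared distance from the nearest chosen
    point favors embeddings that are large in norm (uncertain) and far
    from one another (diverse), with no hand-tuned trade-off knob.
    """
    rng = np.random.default_rng(seed)
    # Assumption: seed with the largest-norm embedding; standard
    # k-means++ would instead draw the first center uniformly.
    chosen = [int(np.argmax((emb ** 2).sum(axis=1)))]
    d2 = ((emb - emb[chosen[0]]) ** 2).sum(axis=1)
    while len(chosen) < k:
        idx = int(rng.choice(len(emb), p=d2 / d2.sum()))
        chosen.append(idx)
        d2 = np.minimum(d2, ((emb - emb[idx]) ** 2).sum(axis=1))
    return chosen

# Toy usage with stand-in model outputs (not real data).
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=1000)    # fake softmax over 10 classes
penult = rng.normal(size=(1000, 64))             # fake penultimate features
batch = kmeans_pp_select(gradient_embeddings(probs, penult), k=32)
print(sorted(batch)[:5])
```

In this sketch the diversity/uncertainty trade-off the abstract highlights falls out of the sampling rule itself: large gradients sit far from everything and are likely first picks, while near-duplicates of already-chosen points get vanishing selection probability.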