Accelerating Batch Active Learning Using Continual Learning Techniques

Published: 02 Dec 2023, Last Modified: 02 Dec 2023. Accepted by TMLR.
Abstract: A major problem with Active Learning (AL) is its high training cost, since models are typically retrained from scratch after every query round. We start by demonstrating that standard AL on neural networks with warm starting fails: fine-tuning over AL query rounds neither accelerates training nor avoids catastrophic forgetting. We then develop a new class of techniques that circumvents this problem by biasing further training towards previously labeled sets. We accomplish this by employing existing, and developing novel, replay-based Continual Learning (CL) algorithms that are effective at quickly learning the new without forgetting the old, especially when data comes from an evolving distribution. We call this paradigm "Continual Active Learning" (CAL). We show that CAL achieves significant speedups using a plethora of replay schemes that use model distillation and that select diverse/uncertain points from the history. We conduct experiments across many data domains, including natural language, vision, medical imaging, and computational biology, each with different neural architectures and dataset sizes. CAL consistently provides a $\sim$3x reduction in training time while retaining performance and out-of-distribution robustness, showing its wide applicability.
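To make the CAL paradigm described in the abstract concrete, the sketch below shows what one query round could look like: a warm-started model is fine-tuned on the newly labeled batch together with a small replay subset of previously labeled points, rather than being retrained from scratch. This is a minimal illustration assuming PyTorch; the function names (train_cal_round, select_replay_indices), the entropy-based replay selection, and all hyperparameters are hypothetical choices for exposition, not the authors' implementation.

```python
# Illustrative sketch of one Continual Active Learning (CAL) query round.
# Assumes PyTorch; names and heuristics are examples, not the paper's code.
import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader, Subset


def select_replay_indices(model, labeled_history, k, device="cpu"):
    """Pick k previously labeled points with the highest predictive entropy,
    one possible uncertainty-based replay-selection heuristic."""
    model.eval()
    entropies = []
    loader = DataLoader(labeled_history, batch_size=256)
    with torch.no_grad():
        for x, _ in loader:
            probs = torch.softmax(model(x.to(device)), dim=-1)
            entropies.append(-(probs * probs.clamp_min(1e-12).log()).sum(dim=-1))
    entropies = torch.cat(entropies)
    return entropies.topk(min(k, len(entropies))).indices.tolist()


def train_cal_round(model, new_data, labeled_history, replay_fraction=0.25,
                    epochs=1, lr=1e-3, device="cpu"):
    """Fine-tune the warm-started model on the newly labeled batch plus a
    small replay subset of the history, biasing training toward old labels
    to mitigate forgetting while avoiding retraining from scratch."""
    k = int(replay_fraction * len(labeled_history))
    replay = Subset(labeled_history,
                    select_replay_indices(model, labeled_history, k, device))
    loader = DataLoader(ConcatDataset([new_data, replay]),
                        batch_size=64, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x.to(device)), y.to(device)).backward()
            optimizer.step()
    return model
```

The replay step could equally use a diversity-based selector or a distillation loss against the previous-round model, as the abstract mentions; the entropy heuristic above is only one stand-in.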
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
- Removed red text from revision
- Discussion on choice of CL and an overview of CL in Table 1
- Added Figure 2 to illustrate distribution shift in AL
- Added discussion of theoretical possibilities in the Future Work section
Supplementary Material: zip
Assigned Action Editor: ~changjian_shui1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1522