Improved Algorithm for Deep Active Learning under Imbalance via Optimal Separation

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Class imbalance severely degrades machine learning performance on minority classes in real-world applications. While various solutions exist, active learning offers a fundamental fix by strategically collecting balanced, informative labeled examples from abundant unlabeled data. We introduce DIRECT, an algorithm that identifies class separation boundaries and selects the most uncertain nearby examples for annotation. By reducing the problem to one-dimensional active learning, DIRECT leverages established theory to handle batch labeling and label noise, another common challenge in data annotation that particularly affects active learning methods. Our work presents the first comprehensive study of active learning under both class imbalance and label noise. Extensive experiments on imbalanced datasets show that DIRECT reduces annotation costs by over 60% compared to state-of-the-art active learning methods and by over 80% versus random sampling, while remaining robust to label noise.
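As a rough illustration of the reduction described in the abstract, the sketch below scores unlabeled examples by a one-vs-rest softmax margin (so sorting by the score turns boundary-finding into a one-dimensional thresholding problem), estimates the class-separation threshold from the labels seen so far by minimizing misranked points (a simple noise-tolerant choice), and requests a batch of the unlabeled examples nearest that threshold. All names here (`direct_style_selection`, the margin score, the split rule) are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def direct_style_selection(probs, labeled_mask, labels, cls, batch_size):
    """Illustrative (not official) DIRECT-style acquisition for one class.

    probs:        (n, k) softmax outputs of the current model
    labeled_mask: (n,) bool, True where a label is already known
    labels:       (n,) int, meaningful only where labeled_mask is True
    cls:          index of the class whose boundary we refine
    batch_size:   number of annotations to request this round
    """
    # 1-D reduction (assumed): one-vs-rest margin for the target class.
    # Sorting by this score lines examples up so that the class
    # separation boundary becomes a single threshold on a line.
    rest = np.delete(probs, cls, axis=1)
    margin = probs[:, cls] - rest.max(axis=1)
    order = np.argsort(margin)

    # Estimate the separation index from labeled points only: choose the
    # split that minimizes misranked labels (non-cls left, cls right).
    # Counting errors over all candidate splits, rather than trusting any
    # single label, gives this step some tolerance to label noise.
    lab = [i for i in order if labeled_mask[i]]
    is_cls = np.array([labels[i] == cls for i in lab], dtype=int)
    cls_on_left = np.concatenate(([0], np.cumsum(is_cls)))
    noncls_on_right = np.concatenate(([0], np.cumsum((1 - is_cls)[::-1])))[::-1]
    split = int(np.argmin(cls_on_left + noncls_on_right)) if lab else 0

    # Threshold in margin space; fall back to 0 before any labels arrive.
    thr = margin[lab[min(split, len(lab) - 1)]] if lab else 0.0

    # Batch selection: the unlabeled examples nearest the estimated
    # boundary, i.e., the most uncertain points around the separator.
    pool = [i for i in range(len(margin)) if not labeled_mask[i]]
    pool.sort(key=lambda i: abs(margin[i] - thr))
    return pool[:batch_size]
```

Picking the points nearest the estimated threshold is the batch analogue of one-dimensional uncertainty sampling: in a full loop one would run a round of this selection, send the batch to annotators in parallel, retrain, and repeat, prioritizing minority classes where boundary-focused batches pay off most.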
Lay Summary: Machine learning models learn by being shown labeled examples, but real-world data is rarely perfect. Some classes, like tumors in medical scans or endangered species in wildlife photos, appear far less frequently than others, and human-provided labels can be wrong due to fatigue or ambiguity. These issues—class imbalance and label noise—make it difficult and expensive to train accurate models. We introduce DIRECT, a new method that helps models learn more effectively from imperfect data. Instead of labeling examples randomly, DIRECT finds the borderline cases the model is most unsure about and focuses labeling efforts there—where new information is most helpful. To do this reliably, even with noisy labels, it breaks the overall problem into simpler, one-dimensional tasks that are easier to solve. DIRECT also supports practical workflows where multiple annotators label data in parallel, unlike many previous methods that assume labels come one at a time. In experiments across diverse datasets, it reduces labeling costs by over 60% compared to state-of-the-art methods, without sacrificing performance. This makes machine learning more efficient and accessible—especially in domains where clean, balanced data is hard to collect.
Primary Area: Deep Learning->Algorithms
Keywords: Deep Learning, Active Learning, Class Imbalance
Submission Number: 13344