ABAS-RAL: Adaptive BAtch Size using Reinforced Active Learning

27 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: Active learning, Reinforcement Learning, Adaptive Batch Size, Annotation Budget
Abstract: Active learning reduces annotation costs by selecting the most informative samples; however, the fixed batch sizes used in traditional methods often lead to inefficient use of resources. We propose Adaptive BAtch Size using Reinforced Active Learning (ABAS-RAL), a novel approach that dynamically adjusts batch sizes based on model uncertainty and performance. By framing the annotation process as a Markov Decision Process, the proposed method employs reinforcement learning to optimize batch size selection, using two distinct policies: one targeting precision and annotation budget, and the other adapting the batch size based on learning progress. The proposed method is evaluated on the CIFAR-10, CIFAR-100, and MNIST datasets, with performance measured across multiple metrics, including precision, accuracy, recall, F1-score, and annotation budget. Experimental results demonstrate that the proposed method consistently reduces annotation costs while maintaining or improving performance compared to fixed-batch active learning methods, achieving higher sample-selection efficiency without compromising model quality.
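
Since the abstract only outlines the approach at a high level, the sketch below is a toy, non-authoritative illustration of how an adaptive-batch-size active-learning loop can be cast as an MDP with a reinforcement-learned batch-size policy. The reward (accuracy gain minus a per-label cost), the tabular Q-learning policy, and all identifiers (AdaptiveBatchPolicy, margin_uncertainty, make_data, etc.) are assumptions introduced for illustration and do not reproduce the paper's actual dual-policy formulation.

```python
# Toy sketch: pool-based active learning where a tabular Q-learning policy
# chooses the next labelling batch size from the model's mean uncertainty.
# Synthetic data and a nearest-centroid "model" keep the example self-contained.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=2000, d=10, k=3):
    """Synthetic k-class Gaussian blobs."""
    centers = rng.normal(0, 3, size=(k, d))
    y = rng.integers(k, size=n)
    return centers[y] + rng.normal(size=(n, d)), y

def fit_centroids(X, y, k):
    """Toy classifier: per-class centroids."""
    return np.stack([X[y == c].mean(axis=0) if np.any(y == c)
                     else np.zeros(X.shape[1]) for c in range(k)])

def predict_proba(centroids, X):
    """Softmax over negative squared distances to the centroids."""
    logits = -((X[:, None, :] - centroids[None]) ** 2).sum(-1)
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def margin_uncertainty(p):
    """1 - (top-1 minus top-2 probability); higher means more uncertain."""
    s = np.sort(p, axis=1)
    return 1.0 - (s[:, -1] - s[:, -2])

class AdaptiveBatchPolicy:
    """Epsilon-greedy tabular Q-learning over a discrete set of batch sizes
    (hypothetical stand-in for a reinforcement-learned batch-size policy)."""
    def __init__(self, batch_sizes=(16, 32, 64), n_states=10,
                 lr=0.1, gamma=0.9, eps=0.2):
        self.batch_sizes, self.n_states = batch_sizes, n_states
        self.q = np.zeros((n_states, len(batch_sizes)))
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def state(self, mean_unc):
        # Discretise mean uncertainty in [0, 1] into n_states bins.
        return min(int(mean_unc * self.n_states), self.n_states - 1)

    def act(self, s):
        if rng.random() < self.eps:
            return int(rng.integers(len(self.batch_sizes)))
        return int(np.argmax(self.q[s]))

    def update(self, s, a, r, s_next):
        td = r + self.gamma * self.q[s_next].max() - self.q[s, a]
        self.q[s, a] += self.lr * td

# Toy run: label the most uncertain samples; reward = accuracy gain - label cost.
k = 3
X, y = make_data()
test_X, test_y = make_data(n=500)
labelled = list(rng.choice(len(X), size=30, replace=False))
pool = [i for i in range(len(X)) if i not in set(labelled)]
policy = AdaptiveBatchPolicy()
cost_per_label = 0.0005

centroids = fit_centroids(X[labelled], y[labelled], k)
acc = (np.argmax(predict_proba(centroids, test_X), axis=1) == test_y).mean()
s = policy.state(margin_uncertainty(predict_proba(centroids, X[pool])).mean())

for step in range(20):
    a = policy.act(s)
    b = min(policy.batch_sizes[a], len(pool))
    unc = margin_uncertainty(predict_proba(centroids, X[pool]))
    chosen = np.array(pool)[np.argsort(-unc)[:b]]        # most uncertain first
    labelled += chosen.tolist()
    pool = [i for i in pool if i not in set(chosen.tolist())]
    centroids = fit_centroids(X[labelled], y[labelled], k)
    new_acc = (np.argmax(predict_proba(centroids, test_X), axis=1) == test_y).mean()
    r = (new_acc - acc) - cost_per_label * b              # gain minus annotation cost
    s_next = policy.state(margin_uncertainty(predict_proba(centroids, X[pool])).mean())
    policy.update(s, a, r, s_next)
    s, acc = s_next, new_acc
    print(f"step {step:2d}  batch={b:3d}  labelled={len(labelled):4d}  acc={acc:.3f}")
```

In this sketch the per-label cost term is what pushes the policy toward smaller batches once accuracy gains flatten; the paper's second policy, which adapts batch size to learning progress, is not modelled here.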
Primary Area: reinforcement learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 10010
