Abstract: Machine learning models excel with abundant annotated data, but annotation is often costly and time-intensive.
Active learning (AL) aims to improve the performance-to-annotation ratio by using query methods (QMs) to iteratively select the most informative samples.
While AL research focuses mainly on QM development, the evaluation of this iterative process lacks appropriate performance metrics.
This work reviews eight years of AL evaluation literature and formally introduces the speed-up factor, a quantitative, multi-iteration QM performance metric that indicates the fraction of samples a QM needs to match the performance of random sampling.
Using four datasets from diverse domains and seven QMs of various types, we empirically evaluate the speed-up factor and compare it with state-of-the-art AL performance metrics.
The results confirm the assumptions underlying the speed-up factor, demonstrate its accuracy in capturing the described fraction, and reveal its superior stability across iterations compared with existing metrics.
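To make the idea concrete, the sketch below illustrates one plausible way such a curve-level metric could be computed from two learning curves (performance versus number of labeled samples) for a QM and for random sampling. The function name, curve format, and aggregation are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def speedup_factor(qm_curve, rnd_curve):
    """Hypothetical sketch: estimate the fraction of labeled samples a query
    method (QM) needs to match random sampling, given two learning curves.

    Each curve is a list of (n_labeled, score) pairs recorded over AL
    iterations. The paper's formal definition may differ; this only
    illustrates a multi-iteration, curve-level ratio.
    """
    qm_n = np.array([n for n, _ in qm_curve])
    qm_s = np.array([s for _, s in qm_curve])
    ratios = []
    for n_rnd, s_rnd in rnd_curve:
        # smallest QM budget whose score reaches the random-sampling score
        reached = np.nonzero(qm_s >= s_rnd)[0]
        if reached.size:
            ratios.append(qm_n[reached[0]] / n_rnd)
    # average fraction of samples the QM needs relative to random sampling
    return float(np.mean(ratios)) if ratios else float("nan")

# Example with made-up curves (accuracy vs. labeled-set size)
random_curve = [(100, 0.60), (200, 0.68), (300, 0.72), (400, 0.75)]
qm_curve     = [(100, 0.68), (200, 0.75), (300, 0.79), (400, 0.81)]
print(speedup_factor(qm_curve, random_curve))  # ~0.67: roughly two thirds of the samples
```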
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Kamalika_Chaudhuri1
Submission Number: 5568