Feature Alignment: Rethinking Efficient Active Learning via Proxy in the Context of Pre-trained Models
Abstract: Fine-tuning pre-trained models with active learning holds promise for reducing annotation costs. However, this combination introduces significant computational costs, particularly as pre-trained models continue to grow in scale. Recent research has proposed proxy-based active learning, which pre-computes features to reduce computational costs. Yet, this approach often incurs a significant loss in active learning performance, sometimes outweighing the computational savings. This paper demonstrates that not all sample selection differences result in performance degradation. Furthermore, we show that suitable training methods can mitigate the decline in active learning performance caused by certain selection discrepancies. Building on this analysis, we propose a novel method, aligned selection via proxy (ASVP), which improves proxy-based active learning performance by updating the pre-computed features and selecting an appropriate training method. Extensive experiments validate that our method reduces the total cost of efficient active learning while maintaining computational efficiency.
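For readers unfamiliar with the proxy-based setup the abstract builds on, the sketch below illustrates the general idea: features are pre-computed once with a frozen pre-trained backbone, and each selection round only trains a lightweight proxy head on those features before scoring the unlabeled pool. This is a minimal, hypothetical illustration of generic selection-via-proxy, not the paper's ASVP implementation (see the linked code for that); all names, the random placeholder features, and the entropy-based acquisition rule are assumptions for the example.

```python
# Minimal sketch of proxy-based active-learning selection over pre-computed
# features (illustrative only; not the ASVP code from this paper).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for features pre-computed once by a frozen pre-trained backbone.
n_pool, n_labeled, feat_dim, n_classes, budget = 10_000, 200, 512, 10, 100
pool_feats = rng.normal(size=(n_pool, feat_dim)).astype(np.float32)
labeled_feats = rng.normal(size=(n_labeled, feat_dim)).astype(np.float32)
labels = rng.integers(0, n_classes, size=n_labeled)

# Train a cheap proxy head on the labeled features; no backbone passes needed.
proxy = LogisticRegression(max_iter=1000).fit(labeled_feats, labels)

# Score the unlabeled pool by predictive entropy and query the most uncertain.
probs = proxy.predict_proba(pool_feats)
entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
query_indices = np.argsort(-entropy)[:budget]
print("Selected for annotation:", query_indices[:10], "...")
```

Because the backbone is never re-run during selection, the per-round cost is dominated by fitting the small proxy head, which is the computational saving (and the source of the selection discrepancies) that the paper analyzes.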
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: 1. Revised Figure 2 to make the training cost, labeling cost, and sampling time for Standard AL, SVPp, and ASVP (ours) clear.
2. Included an explanation in the paper (Section 8) of why the experiments are not evaluated on larger models.
3. Extended the experiments on the remaining two datasets to approach the expected performance upper bound, and updated Figure 11, Table 1, Figure 12, Figure 13, and the related appendix results with the results of these extended experiments. The updated Figure 11 also includes a line showing the upper bound (training with the full dataset).
4. Camera-ready version: deanonymized and included the code link.
Code: https://github.com/ZiTingW/asvp
Supplementary Material: zip
Assigned Action Editor: ~Chicheng_Zhang1
Submission Number: 2870