When Can Locally Deployable Small Models Substitute LLMs? An Empirical Study on Active Learning in Real-World Scenarios
Abstract: Large Language Models (LLMs) excel at diverse benchmark tasks, yet face many deployment barriers in real-world scenarios, such as data privacy concerns and limited computing resources. On the other hand, low-resource learning techniques like Active Learning (AL) can reduce the annotation cost of fine-tuning locally deployable small models. Consequently, when and how such AL-assisted small models with low-resource expert annotations can substitute off-the-shelf generic LLMs in real-world scenarios is a critical yet overlooked question. This empirical study compares AL-assisted small models against generic LLMs on five real-world tasks with expert annotations. Our AL simulations validate the significance of AL-assisted, locally deployable small models as well as the importance of selecting appropriate AL sampling strategies in real-world scenarios. We further discuss a promising future paradigm that leverages LLMs to "warm up" AL-assisted small models.
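To make the "warm-up" paradigm concrete, below is a minimal sketch of one way it could work, assuming a scikit-learn text-classification setup: a small model is first trained on noisy LLM pseudo-labels, then refined by an AL loop that spends the expert-annotation budget on the most uncertain examples via margin sampling. The llm_pseudo_label helper, the budget and batch sizes, and the oracle interface are illustrative assumptions, not the paper's actual method.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    def llm_pseudo_label(texts):
        # Hypothetical stand-in for prompting a generic LLM for noisy labels;
        # a trivial keyword heuristic is used here so the sketch runs end to end.
        return [1 if "good" in t.lower() else 0 for t in texts]

    def warm_started_al_loop(texts, oracle, budget=100, batch=10):
        vec = TfidfVectorizer().fit(texts)
        X = vec.transform(texts)
        # Warm-up: seed the small model with LLM pseudo-labels instead of
        # starting the AL loop from a random cold start.
        y = np.array(llm_pseudo_label(texts))
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        labeled = set()
        for _ in range(budget // batch):
            probs = np.sort(clf.predict_proba(X), axis=1)
            # Margin sampling: smallest gap between top-2 class probabilities
            # marks the examples the model is least certain about.
            margin = probs[:, -1] - probs[:, -2]
            queries = [i for i in np.argsort(margin) if i not in labeled][:batch]
            for i in queries:
                y[i] = oracle(texts[i])  # expert annotation replaces pseudo-label
                labeled.add(i)
            clf = clf.fit(X, y)
        return clf

Under this design the model never starts from scratch: the LLM supplies a cheap but noisy initial label for every example, and the limited expert budget is reserved for correcting the labels where the warmed-up small model remains most uncertain.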
Paper Type: short
Research Area: Efficient/Low-Resource Methods for NLP
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English