Using Language Models on Low-end Hardware

Anonymous

17 Jul 2021 (modified: 05 May 2023) · ACL ARR 2021 July Blind Submission
Abstract: This paper evaluates the viability of using fixed language models for training text classification networks on low-end hardware. We combine language models with a CNN architecture and assemble a comprehensive benchmark of 8 datasets covering single- and multi-label classification of topic, sentiment, and genre. Our observations are distilled into a list of trade-offs, concluding that there are scenarios where not fine-tuning a language model yields competitive effectiveness, trains faster, and requires only a quarter of the memory needed for fine-tuning.
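To make the described setup concrete, below is a minimal sketch (not the authors' code) of a fixed language model feeding a CNN text classifier: the language model's parameters are frozen, so no gradients or optimizer state are kept for it, which is where the memory savings over fine-tuning come from. The model name, kernel sizes, filter count, and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class FrozenLMCNNClassifier(nn.Module):
    """A fixed (frozen) language model feeding a small CNN classifier."""

    def __init__(self, lm_name="bert-base-uncased", num_classes=4,
                 kernel_sizes=(3, 4, 5), num_filters=100):
        super().__init__()
        self.lm = AutoModel.from_pretrained(lm_name)
        # Freeze the language model: no gradients and no optimizer state
        # are needed for it, unlike when fine-tuning.
        for p in self.lm.parameters():
            p.requires_grad = False
        hidden = self.lm.config.hidden_size
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, num_filters, k) for k in kernel_sizes
        )
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():  # the LM stays fixed during training
            out = self.lm(input_ids=input_ids, attention_mask=attention_mask)
        x = out.last_hidden_state.transpose(1, 2)  # (batch, hidden, seq_len)
        # Convolve over the token sequence and max-pool each feature map.
        feats = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(feats, dim=1))

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = FrozenLMCNNClassifier()
batch = tokenizer(["an example document"], return_tensors="pt",
                  padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])
```

For the multi-label datasets in the benchmark, the same network would plausibly be trained with a per-label sigmoid loss (e.g. `nn.BCEWithLogitsLoss`) instead of cross-entropy; only the CNN and classifier head receive gradient updates.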