Text Classification in the LLM Era - Where do we stand?

ACL ARR 2024 December Submission 379 Authors

13 Dec 2024 (modified: 19 Feb 2025) · ACL ARR 2024 December Submission · CC BY 4.0
Abstract: Large Language Models (LLMs) have revolutionized NLP, delivering dramatic performance improvements across many tasks. In this paper, we investigate the role of such models in text classification and how they compare with approaches that rely on smaller pre-trained language models. Across 32 datasets spanning 8 languages, we compare zero-shot classification, few-shot fine-tuning, and synthetic-data-based classifiers against classifiers built with the complete human-labeled dataset. Our results show that zero-shot approaches do well for sentiment classification but are outperformed by other approaches on the remaining tasks, and that synthetic data sourced from multiple LLMs can yield better classifiers than zero-shot open LLMs. We also observe wide performance disparities across languages in all classification scenarios. We expect these findings to guide practitioners developing text classification systems across languages.
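For readers unfamiliar with the zero-shot setting the abstract compares against, the sketch below illustrates prompt-based zero-shot classification in its simplest form. It is not the authors' code: `llm_complete` is a hypothetical stand-in for whatever LLM completion API a practitioner has available, and the sentiment label set is an assumed example.

```python
# Illustrative sketch only, not the paper's implementation.
# `llm_complete` is a hypothetical callable: prompt string in, completion string out.

LABELS = ["positive", "negative", "neutral"]  # assumed example sentiment labels


def zero_shot_classify(text: str, labels: list[str], llm_complete) -> str:
    """Ask an LLM to assign exactly one label to `text` via a plain prompt."""
    prompt = (
        "Classify the following text into exactly one of these labels: "
        + ", ".join(labels)
        + f".\nText: {text}\nLabel:"
    )
    answer = llm_complete(prompt).strip().lower()
    # LLM output often needs normalization; keep only a known label,
    # falling back to the first label if none is recognized.
    return next((label for label in labels if label in answer), labels[0])
```

A practitioner would call `zero_shot_classify(review_text, LABELS, my_llm)` with their own completion function; the fine-tuning and synthetic-data baselines in the paper instead train a classifier on (real or LLM-generated) labeled examples.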
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: Text Classification, Synthetic Data Generation, Multilingual Evaluation
Contribution Types: NLP engineering experiment
Languages Studied: Arabic, English, French, German, Hindi, Italian, Portuguese, Spanish
Submission Number: 379