Abstract: We propose SALSA (Single-pass Autoregressive LLM Structured Classification), a method that harnesses the transferred knowledge of open-ended generative Large Language Models (LLMs) for text classification. By structuring task prompts and response formats, and analyzing only the logits of the relevant target tokens, SALSA performs computationally efficient classification with the generation of only a single token. We show that fine-tuning LLMs with Low-Rank Adaptation (LoRA) under SALSA's approach achieves state-of-the-art results on selected classification benchmarks. SALSA not only improves accuracy but also reaches top results faster than existing methods.
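The core single-token classification step described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' released code: it assumes each class label maps to one vocabulary token, and reads only the logits of those label tokens at the first generated position, applying a softmax restricted to them.

```python
import math

def classify_from_logits(vocab_logits, label_token_ids):
    """Pick the label whose token has the highest logit, using a
    softmax restricted to the candidate label tokens and ignoring
    the rest of the vocabulary (illustrative sketch of the idea)."""
    selected = [vocab_logits[t] for t in label_token_ids.values()]
    m = max(selected)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in selected]
    z = sum(exps)
    probs = {label: e / z for label, e in zip(label_token_ids, exps)}
    return max(probs, key=probs.get), probs

# Toy vocabulary of 10 tokens; ids 3 and 7 stand in for the tokens
# "positive" and "negative" in a hypothetical sentiment prompt.
vocab_logits = [0.1, -1.2, 0.3, 2.5, 0.0, -0.4, 1.1, 0.9, -2.0, 0.2]
label_token_ids = {"positive": 3, "negative": 7}
pred, probs = classify_from_logits(vocab_logits, label_token_ids)
# pred == "positive", since the logit at id 3 (2.5) exceeds id 7 (0.9)
```

In practice `vocab_logits` would come from a single forward pass of the fine-tuned LLM at the answer position, so no autoregressive decoding beyond one token is needed.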
Paper Type: Short
Research Area: Machine Learning for NLP
Research Area Keywords: fine-tuning; prompting; generative models; transfer learning / domain adaptation; few-shot learning
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 668