A comparative study of prompting strategies for legal text classification

Published: 01 Dec 2023, Last Modified: 07 May 2026
Proceedings of the Natural Legal Language Processing Workshop 2023
License: CC BY 4.0
Abstract: In this study, we explore the performance of large language models (LLMs) under different prompt engineering approaches in the context of legal text classification. Prior research has demonstrated that various prompting techniques can improve LLM performance across a diverse array of tasks. However, in this research, we observe that professional documents, and in particular legal documents, pose unique challenges for LLMs. We experiment with several LLMs and various prompting techniques, including zero/few-shot prompting, prompt ensembling, chain-of-thought, and activation fine-tuning, and compare their performance on legal datasets. Although the new generation of LLMs and prompt optimization techniques have been shown to improve generation and understanding on generic tasks, our findings suggest that such improvements may not readily transfer to other domains. Specifically, our experiments indicate that not all prompting approaches and models are well suited for the legal domain, which involves complexities such as long documents and domain-specific language.