Prompt4LJP: prompt learning for legal judgment prediction

Published: 01 Jan 2025, Last Modified: 14 May 2025 · J. Supercomput. 2025 · CC BY-SA 4.0
Abstract: The task of legal judgment prediction (LJP) involves predicting court decisions based on the facts of the case, including identifying the applicable law article, the charge, and the term of penalty. While neural methods have made significant strides in this area, they often fail to fully harness the rich semantic potential of language models (LMs). Prompt learning is a novel paradigm in natural language processing (NLP) that reformulates downstream tasks into cloze-style or prefix-style prediction problems by utilizing specialized prompt templates. This paradigm shows significant potential across various NLP domains, including short text classification. However, the variable word lengths of LJP labels pose a challenge for general prompt templates designed around the single-word [MASK] tokens commonly used in many NLP tasks. To address this gap, we introduce the Prompt4LJP framework, a new method based on the prompt learning paradigm for the complex LJP task. Our framework employs a dual-slot prompt template in conjunction with a correlation scoring mechanism to maximize the utility of LMs without requiring additional resources or complex tokenization schemes. Specifically, the dual-slot template consists of two distinct slots: one dedicated to factual descriptions and the other to labels. This design sidesteps the challenge of variable-length LJP labels by reformulating the LJP classification task as an evaluation of the applicability of each candidate label; the correlation scoring mechanism then identifies the final result label. The experimental results show that our Prompt4LJP method, whether using discrete or continuous templates, outperforms baseline methods, particularly in charge and term-of-penalty prediction. Compared to the best baseline model EPM, Prompt4LJP achieves F1-score improvements of 2.25% and 4.76% on charge and term-of-penalty prediction, respectively, with discrete templates, and 3.24% and 4.05% with the continuous template, demonstrating Prompt4LJP's ability to leverage pretrained knowledge and adapt flexibly to specific tasks. The source code can be obtained from https://github.com/huangqiongyannn/Prompt4LJP.
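
To make the dual-slot idea concrete, the sketch below illustrates one plausible way such a template and correlation score could be wired up with an off-the-shelf masked LM: the fact and each candidate label fill their own slots, and a single [MASK] token is scored with a yes/no verbalizer. The template wording, verbalizer words, model name, and scoring formula here are illustrative assumptions for exposition, not the authors' exact implementation (see the linked repository for that).

```python
# Illustrative sketch of dual-slot prompt scoring for LJP-style label selection.
# NOTE: template text, verbalizer tokens ("yes"/"no"), and the backbone model
# are assumptions for illustration; they are not Prompt4LJP's exact setup.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # placeholder backbone (assumption)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

POSITIVE_WORD = "yes"  # verbalizer for "label is applicable" (assumption)
NEGATIVE_WORD = "no"   # verbalizer for "label is not applicable"


def applicability_score(fact: str, label: str) -> float:
    """Score how applicable `label` is to `fact` using a dual-slot prompt.

    The fact and the candidate label each occupy their own slot, so the
    label's length never has to fit into the [MASK] token; only the
    yes/no answer is predicted at the [MASK] position.
    """
    template = (
        f"Facts: {fact} Is the label \"{label}\" applicable? {tokenizer.mask_token}"
    )
    inputs = tokenizer(template, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0][0]
    probs = logits[0, mask_pos].softmax(dim=-1)
    pos_id = tokenizer.convert_tokens_to_ids(POSITIVE_WORD)
    neg_id = tokenizer.convert_tokens_to_ids(NEGATIVE_WORD)
    # A simple correlation-style score: relative weight of the positive verbalizer.
    return (probs[pos_id] / (probs[pos_id] + probs[neg_id])).item()


def predict(fact: str, candidate_labels: list[str]) -> str:
    """Return the candidate label with the highest applicability score."""
    return max(candidate_labels, key=lambda lbl: applicability_score(fact, lbl))


if __name__ == "__main__":
    fact = "The defendant broke into a residence at night and stole property."
    charges = ["theft", "robbery", "fraud"]
    print(predict(fact, charges))
```

In this reading, the classification task is reduced to scoring each (fact, label) pair independently and selecting the highest-scoring label, which is how a fixed-size [MASK] slot can accommodate charges, law articles, or penalty terms of arbitrary length.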