Abstract: Legal Judgment Prediction (LJP) involves predicting court decisions from the facts of a case, including the applicable law article, the charge, and the term of penalty. While neural methods have made significant strides in this area, they often fail to fully exploit the rich semantic knowledge of language models (LMs). Prompt learning, which reformulates downstream tasks as cloze-style or prefix-style prediction tasks for masked language models using specialized prompt templates, has shown considerable promise across many Natural Language Processing (NLP) domains. However, LJP labels have dynamic word lengths, which the standard prompt templates built around a single-word [MASK] token cannot accommodate. To address this gap, we introduce the Prompt4LJP framework, a method tailored to incorporate the knowledge of LMs into the LJP task while effectively handling labels of dynamic word length. The framework combines a dual-slot prompt template with correlation scoring to maximize the utility of LMs without requiring additional resources or complex tokenization schemes. Our method significantly outperforms current state-of-the-art techniques on the CAIL-2018 dataset, improving the accuracy and reliability of LJP. This contribution not only advances the field of LJP but also demonstrates a novel application of prompt learning to tasks involving labels of dynamic length.
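The abstract's core idea, filling a cloze-style template with candidate labels of varying length and ranking them by a correlation score, can be illustrated with a toy sketch. This is not the paper's implementation: `build_prompt`, `toy_score`, and `predict_charge` are hypothetical names, and the character-overlap scorer is a stand-in for a real masked-LM score, included only to show the control flow.

```python
# Toy sketch (NOT the Prompt4LJP implementation): score variable-length
# candidate labels against a cloze-style prompt. A real system would query
# a masked language model; here a character-overlap heuristic stands in
# for the LM's correlation score, purely to illustrate the idea.

def build_prompt(fact: str, label: str) -> str:
    """Fill a cloze-style template with a candidate label of any length."""
    template = "{fact} The charge in this case is {label}."
    return template.format(fact=fact, label=label)

def toy_score(fact: str, label: str) -> float:
    """Stand-in for an LM correlation score: fraction of label characters
    that also occur in the fact description."""
    fact_chars = set(fact)
    return sum(ch in fact_chars for ch in label) / max(len(label), 1)

def predict_charge(fact: str, candidates: list[str]) -> str:
    """Pick the candidate label whose filled prompt scores highest."""
    return max(candidates, key=lambda c: toy_score(fact, c))
```

Because each candidate label is inserted whole, the template never has to fix the number of [MASK] tokens in advance, which is the difficulty with dynamic-length labels that the abstract describes.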
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: legal NLP
Contribution Types: NLP engineering experiment
Languages Studied: Chinese
Submission Number: 2853