JurisAgents: A Multi-Agent Framework for Legal Judgment Prediction

ACL ARR 2025 May Submission 8093 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Legal Judgment Prediction (LJP) aims to predict the outcomes of legal cases from their factual descriptions and serves as a fundamental task for advancing legal systems. Traditional approaches to LJP rely on statistical analyses of past judgments or adopt agent-based frameworks from a role-playing perspective. However, existing frameworks struggle to handle multiple allegations and diverse forms of evidence, and their simplistic courtroom simulations often lead to one-sided decisions and insufficient adaptability. In this paper, we introduce $\textit{\textbf{JurisAgents}}$, a novel framework for LJP that decomposes trial tasks, standardizes procedures, and organizes them into distinct stages. Furthermore, to account for the dynamic nature and continual updates of legal statutes, we propose $\textit{\textbf{JurisMM}}$, a collection of more than 50,000 recent legal case records drawn from Chinese judicial proceedings. It includes both unimodal textual data and multimodal data combining video and text, enabling a comprehensive examination of our framework's capabilities. We validate the framework on both $\textit{JurisMM}$ and the widely used legal benchmark LawBench, explore the impact of multimodal data, and achieve state-of-the-art results across the designed experiments. These results indicate that our framework is effective not only for LJP but also for a broader range of legal applications, offering new perspectives for the development of future legal methods and datasets.
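To make the staged decomposition described in the abstract concrete, the sketch below illustrates one way a trial could be split into sequential stages handled by dedicated agents that share an evolving case record. This is only an assumed illustration: the class and function names (`CaseRecord`, `run_trial`, the individual stage agents) are hypothetical and not taken from the paper, and the agents are stubs where an implementation would call a language model.

```python
# Hypothetical sketch of a staged multi-agent trial pipeline.
# Names and stage choices are illustrative assumptions, not the authors' implementation.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple


@dataclass
class CaseRecord:
    """Minimal container for one case: facts plus accumulated per-stage outputs."""
    facts: str
    allegations: List[str]
    evidence: List[str] = field(default_factory=list)
    stage_outputs: Dict[str, str] = field(default_factory=dict)


# An "agent" here is just a function from the evolving case record to a textual finding;
# in practice each one would wrap a prompted LLM call.
Agent = Callable[[CaseRecord], str]


def evidence_review(case: CaseRecord) -> str:
    return f"reviewed {len(case.evidence)} pieces of evidence"


def allegation_analysis(case: CaseRecord) -> str:
    return f"analyzed {len(case.allegations)} allegation(s) separately"


def judgment(case: CaseRecord) -> str:
    return "draft judgment based on: " + "; ".join(case.stage_outputs.values())


def run_trial(case: CaseRecord, stages: List[Tuple[str, Agent]]) -> CaseRecord:
    """Run the staged pipeline in order; each agent sees all earlier stage outputs."""
    for name, agent in stages:
        case.stage_outputs[name] = agent(case)
    return case


if __name__ == "__main__":
    record = CaseRecord(
        facts="Defendant is accused of fraud and breach of contract.",
        allegations=["fraud", "breach of contract"],
        evidence=["contract scan", "bank transfer log", "witness video"],
    )
    result = run_trial(record, [
        ("evidence_review", evidence_review),
        ("allegation_analysis", allegation_analysis),
        ("judgment", judgment),
    ])
    print(result.stage_outputs["judgment"])
```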
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: Legal NLP
Contribution Types: NLP engineering experiment
Languages Studied: English, Chinese
Submission Number: 8093