JudiAgents Framework: A Judicial Decision-Making Simulation Framework Integrating Diverse Agent Configurations and Deliberation Processes
Abstract: Legal Artificial Intelligence has made significant strides by using Large Language Models (LLMs) for tasks such as judgment prediction. The field has evolved from simple text classification to direct prediction with LLMs, and the trend is now shifting toward building agents that simulate judicial processes. However, most existing efforts are confined to simulating single, localized judicial steps and lack depth and multi-perspective evaluation. We therefore present the JudiAgents Framework, a multi-agent framework designed to simulate the entire judicial decision-making process in depth. The framework covers agent construction, courtroom debate, jury discussion and deliberation, and the prediction of judgment outcomes and their legal basis, forming a complete, realistic, end-to-end judicial simulation. We conduct experiments on datasets from China Judgments Online, covering real cases of various types, including civil, criminal, first-instance, and second-instance cases. The results show that JudiAgents outperforms baseline models in predicting judgment outcomes and generating legal bases.
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: legal NLP, judicial decision-making, multi-agent systems, simulation, deliberation process, judgment prediction, agent-based modeling, LLM/AI agents, human behavior analysis
Contribution Types: NLP engineering experiment, Data analysis
Languages Studied: Chinese, English
Keywords: legal NLP, judicial decision-making, multi-agent systems, simulation, deliberation process, judgment prediction, agent-based modeling, LLM/AI agents, human behavior analysis
Submission Number: 1646