LegalGPT: Legal Chain of Thought for the Legal Large Language Model Multi-agent Framework

Published: 01 Jan 2024, Last Modified: 08 Feb 2025 · ICIC (LNAI 6) 2024 · License: CC BY-SA 4.0
Abstract: This paper examines the application of Large Language Models (LLMs) to legal task processing, with a specific focus on the emergent capabilities these models have demonstrated in complex reasoning and zero-shot learning (ZSL). By introducing a multi-agent framework built on large legal language models, the research aims to improve model efficiency and performance across a range of functions, including legal review, consultation, and judicial decision-making. The framework uses function-specific legal chains of thought and specialized agents to guide the LLMs in handling legal tasks, optimizing response behavior, and strengthening reasoning. It also incorporates an information retrieval module to mitigate the common hallucination problem of LLMs, improving response reliability and the model's ability to tackle complex legal issues. Evaluation shows that LegalGPT outperforms existing legal LLMs in accuracy, completeness, and linguistic quality across several Chinese legal domains.
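The architecture the abstract describes, with function-specific agents, each carrying its own legal chain-of-thought scaffold, grounded by a retrieval step, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the agent names, chain-of-thought steps, mini-corpus, and the keyword-overlap retriever are all hypothetical stand-ins for the framework's components.

```python
from dataclasses import dataclass

@dataclass
class LegalAgent:
    """A hypothetical agent specialized for one legal function."""
    name: str
    cot_steps: list  # function-specific legal chain-of-thought steps

    def build_prompt(self, question: str, retrieved: list) -> str:
        """Compose a prompt: retrieved references first (to curb
        hallucination), then the chain-of-thought scaffold."""
        context = "\n".join(f"[Ref] {doc}" for doc in retrieved)
        steps = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(self.cot_steps))
        return f"{context}\nQuestion: {question}\nReason step by step:\n{steps}"

def retrieve(question: str, corpus: list, k: int = 2) -> list:
    """Toy keyword-overlap retriever standing in for the IR module."""
    score = lambda doc: len(set(question.split()) & set(doc.split()))
    return sorted(corpus, key=score, reverse=True)[:k]

# One agent per legal function named in the abstract (others analogous).
consult_agent = LegalAgent(
    name="consultation",
    cot_steps=[
        "Identify the legal issue",
        "Locate the governing statutes",
        "Apply the statutes to the facts",
        "State practical advice",
    ],
)

corpus = [
    "Contract Law Article 52 on void contracts",
    "Labor Law Article 38 on employee resignation rights",
    "Criminal Law Article 264 on theft",
]

refs = retrieve("Can an employee resign under Labor Law", corpus)
prompt = consult_agent.build_prompt("Can an employee resign?", refs)
print(prompt)  # prompt to be passed to the underlying legal LLM
```

A dispatcher would route each query to the agent whose function matches it (review, consultation, or judgment) and send the composed prompt to the underlying LLM; the retrieved references keep the model's answer anchored to actual statutes.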