Abstract: This paper examines the application of Large Language Models (LLMs) to legal task processing, with a specific focus on the emergent capabilities these models have demonstrated in complex reasoning and zero-shot learning (ZSL). By introducing a multi-agent framework based on Large Scale Legal Language Modeling, the research aims to improve the efficiency and performance of the models across a range of functions, including legal review, consultation, and judicial decision-making. The framework uses function-specific legal chains of thought and specialized agents to guide the LLMs through legal tasks, optimize response behavior, and strengthen reasoning capabilities. The study also incorporates an information retrieval module to mitigate the common hallucination problem of LLMs, thereby improving response reliability and the model's ability to tackle complex legal issues. Evaluation shows that the LegalGPT model outperforms existing legal LLMs in accuracy, completeness, and linguistic quality across several Chinese legal domains.
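The architecture the abstract describes, task-specific chain-of-thought prompting combined with a retrieval step to ground responses, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual implementation; all names here (`LegalAgent`, `retrieve_statutes`, the templates, the stub model) are hypothetical, and the retrieval is a toy word-overlap ranker standing in for a real retriever.

```python
# Hypothetical sketch of the pipeline described in the abstract:
# a per-task chain-of-thought template plus a retrieval module that
# grounds the prompt in statute text before the model is called.
# All names are illustrative, not taken from the paper.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# One chain-of-thought template per legal function named in the abstract.
COT_TEMPLATES: Dict[str, str] = {
    "review": ("Step 1: identify the clauses. Step 2: flag risks. "
               "Step 3: conclude.\nContext:\n{context}\nQuestion: {query}"),
    "consultation": ("Step 1: restate the issue. Step 2: cite relevant law. "
                     "Step 3: advise.\nContext:\n{context}\nQuestion: {query}"),
    "judgment": ("Step 1: establish facts. Step 2: apply the law. "
                 "Step 3: decide.\nContext:\n{context}\nQuestion: {query}"),
}

def retrieve_statutes(query: str, corpus: List[str], k: int = 2) -> List[str]:
    """Toy retrieval: rank passages by word overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: -len(words & set(p.lower().split())))
    return ranked[:k]

@dataclass
class LegalAgent:
    task: str                              # which template to use
    llm: Callable[[str], str]              # pluggable model call
    corpus: List[str] = field(default_factory=list)

    def answer(self, query: str) -> str:
        # Ground the prompt in retrieved passages to curb hallucination.
        passages = retrieve_statutes(query, self.corpus)
        prompt = COT_TEMPLATES[self.task].format(
            context="\n".join(passages), query=query)
        return self.llm(prompt)

# Stub model for demonstration; a real system would call an LLM here.
echo_llm = lambda prompt: "ANSWER grounded in:\n" + prompt

agent = LegalAgent(
    task="consultation",
    llm=echo_llm,
    corpus=["Contract law: a contract requires offer and acceptance.",
            "Criminal law: theft is the unlawful taking of property."],
)
print(agent.answer("Is my contract valid without acceptance?"))
```

In a real system the stubbed `llm` callable would wrap the underlying model, and the retriever would query an indexed statute database rather than scoring raw strings, but the control flow (retrieve, fill the task-specific template, generate) is the shape the abstract describes.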