Towards Expert Legal LLM Responses: Logical Structure and Semantic Information Integration

ACL ARR 2024 June Submission5670 Authors

16 Jun 2024 (modified: 22 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Large language models (LLMs) have demonstrated excellent performance across various fields. Nevertheless, they exhibit notable deficiencies when addressing legal questions. In the legal field, LLMs often provide generalized responses that lack the specificity required for expert legal advice. Additionally, they tend to produce answers that appear correct but are unreliable due to hallucination. Retrieval-Augmented Generation (RAG) is a popular approach to addressing these issues. However, existing methods often focus solely on semantic-level similarity, neglecting the logical structure relationships between different legal questions. In this paper, we propose a Logical-Semantic Integration Model (LSIM), which consists of three components. First, reinforcement learning is used to predict the fact-rule chain of thought for the given question. Second, a Deep Structured Semantic Model (DSSM) that integrates logical structure and semantic information retrieves the most relevant candidate questions from the database. Finally, in-context learning is used to generate the final answer. Experiments on a real-world legal QA dataset, using both automated metrics and human evaluation, demonstrate the effectiveness of the proposed method. The dataset will be released to the community to promote the development of the legal QA field.
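The abstract's core retrieval idea — scoring candidate questions by combining semantic similarity with fact-rule-chain structure similarity — can be sketched minimally. This is a hypothetical illustration, not the paper's implementation: the function names, the Jaccard overlap for chains, and the weighting parameter `alpha` are all assumptions.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def chain_overlap(chain_a, chain_b):
    # Jaccard overlap between two fact-rule chains,
    # treated here as sets of reasoning steps (an assumption).
    sa, sb = set(chain_a), set(chain_b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def retrieval_score(query_emb, cand_emb, query_chain, cand_chain, alpha=0.5):
    # Weighted combination of semantic similarity and
    # logical-structure similarity; alpha is a free parameter.
    return (alpha * cosine(query_emb, cand_emb)
            + (1 - alpha) * chain_overlap(query_chain, cand_chain))
```

Candidates would then be ranked by `retrieval_score` and the top results supplied as in-context examples for answer generation.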
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: legal NLP
Contribution Types: Model analysis & interpretability, Data resources, Data analysis
Languages Studied: English
Submission Number: 5670