Enhancing LLMs in Legal Judgment Prediction via Neuro-Symbolic Reasoning

Published: 05 Mar 2026, Last Modified: 05 Mar 2026 · ICLR 2026 Workshop LLM Reasoning · CC BY 4.0
Track: long paper (up to 10 pages)
Keywords: LLMs, Legal Judgment Prediction, Logical Reasoning
Abstract: Large Language Models (LLMs) often struggle with Legal Judgment Prediction (LJP) tasks: despite their strong semantic capabilities, they fail to maintain the rigorous logical consistency that judicial decision-making requires. Existing methods that rely on LLMs' own reasoning remain prone to instability and hallucination, leaving LJP without a logically reliable and explainable approach. To fill this gap, we propose a novel neuro-symbolic approach that integrates an external logical solver to determine whether the conduct described in the case facts constitutes a violation of specific law articles. Specifically, our approach uses an LLM to translate texts into symbolic representations, performs reasoning via an external solver to determine the logical consistency between the articles and facts, and interprets the solver's output so that the final answer is both logically accurate and contextually readable. Experiments demonstrate that our method significantly outperforms both general and law-domain-specific reasoning baselines.
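The three-stage pipeline in the abstract (LLM translation to symbols, external solver check, interpretation back to text) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the LLM extraction step is stubbed with a fixed dictionary, the "external solver" is replaced by a simple exhaustive propositional-entailment check, and all predicate and article names are hypothetical.

```python
# Hedged sketch of a neuro-symbolic LJP pipeline, assuming:
# (1) an LLM has already translated the case fact into symbolic predicates
#     (stubbed below as a fixed dict),
# (2) a law article can be encoded as a propositional rule
#     (all antecedents => violation),
# (3) the external solver is approximated by exhaustive model checking.
from itertools import product

# Stage 1 (stub): symbolic representation an LLM might extract from a fact.
case_facts = {"took_property": True,
              "property_of_another": True,
              "intent_permanent_deprivation": True}

# Illustrative article encoded as antecedents that jointly imply a violation.
article_theft = {"antecedents": ["took_property", "property_of_another",
                                 "intent_permanent_deprivation"],
                 "label": "theft"}

def entails_violation(facts, article):
    """Stage 2: check that the facts entail all antecedents of the article.
    Predicates the LLM did not extract are treated as unknown, and the
    entailment must hold under every completion of the unknowns."""
    unknowns = [p for p in article["antecedents"] if p not in facts]
    for assignment in product([True, False], repeat=len(unknowns)):
        model = dict(facts, **dict(zip(unknowns, assignment)))
        if not all(model[p] for p in article["antecedents"]):
            return False
    return True

def interpret(facts, article):
    """Stage 3: render the solver verdict as a readable judgment."""
    if entails_violation(facts, article):
        return f"The conduct constitutes {article['label']}."
    return f"The facts do not establish {article['label']}."

print(interpret(case_facts, article_theft))
# -> The conduct constitutes theft.
```

In the paper's actual method a dedicated logical solver would replace `entails_violation`, and the unknown-predicate handling is what keeps the verdict conservative: if the LLM fails to extract an element of the offence, the article is not deemed violated.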
Presenter: ~Zhaozuo_Liu1
Format: Yes, the presenting author will attend in person if this work is accepted to the workshop.
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Funding: No, the presenting author of this submission does *not* fall under ICLR’s funding aims, or has sufficient alternate funding.
Submission Number: 133