Fidelity-Constrained Decoding for Legal Tasks in Large Language Models without Finetuning

ACL ARR 2025 May Submission412 Authors

12 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Large language models (LLMs) for legal tasks are designed to assist judges and lawyers in decision-making, where fidelity to case facts and legal elements is crucial for generating reliable legal interpretations and accurate predictions. However, existing methods, including prompt-based and fine-tuning approaches, either require extensive human effort or lack an explicit mechanism for enforcing fidelity in model outputs. To address these challenges, we propose Fidelity-Constrained Decoding (FCD), a tuning-free framework that constrains the decoding process to maintain strict alignment with case facts and legal elements. Extensive experiments on three datasets with two open-domain LLMs show that FCD consistently improves performance on legal tasks.
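The page gives no implementation details for FCD. Purely as a hedged sketch of one way a fidelity constraint could act at decoding time (all names here, `fidelity_constrained_logits`, `fact_token_ids`, and `bonus`, are hypothetical and not from the paper), the snippet below applies a soft additive bonus to next-token logits for tokens drawn from the case facts and legal elements:

```python
import torch

def fidelity_constrained_logits(
    logits: torch.Tensor,          # [batch, vocab] next-token logits
    fact_token_ids: set[int],      # token ids occurring in the case facts/elements
    bonus: float = 2.0,            # strength of the soft fidelity constraint
) -> torch.Tensor:
    """Soft lexical constraint: additively boost tokens from the source facts."""
    biased = logits.clone()
    idx = torch.tensor(sorted(fact_token_ids), dtype=torch.long)
    biased[..., idx] += bonus      # raise the score of fact-grounded tokens
    return biased

# Toy usage with a 10-token vocabulary; ids {2, 5, 7} appear in the case facts.
step_logits = torch.randn(1, 10)
next_token = torch.argmax(
    fidelity_constrained_logits(step_logits, {2, 5, 7}), dim=-1
)
print(next_token)  # greedy pick under the fidelity-biased distribution
```

A soft additive bias, rather than a hard mask, leaves the model free to produce connective language while still steering content words toward the source material; whether the actual FCD framework takes this form is not stated on this page.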
Paper Type: Short
Research Area: NLP Applications
Research Area Keywords: legal NLP
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis
Languages Studied: Chinese
Keywords: legal NLP, constraint decoding, fidelity
Submission Number: 412