[Regular] RuleSum: Injecting Rulesets into Knowledge Graphs for Accurate and Accessible Legal Summarization

Published: 08 Nov 2025, Last Modified: 08 Nov 2025
Venue: NeurIPS 2025 Workshop NORA (Poster)
License: CC BY 4.0
Keywords: large language models, structured prompting, knowledge graphs, educational NLP, evaluation frameworks
Abstract: Legal texts are often complex and inaccessible, limiting understanding for non-experts. Large language models (LLMs) can summarize such material but frequently sacrifice interpretability and factual accuracy. We present RuleSum, a framework that integrates structured rulesets and knowledge graphs (KGs) with LLMs to generate legal summaries that are faithful, readable, and pedagogically aligned. Leveraging the IRAC method (Issue, Rule, Application, Conclusion) as a reasoning scaffold, RuleSum applies structured representations—free-form, tuple-style (KAPING), and IRAC-labeled serialization—to guide summarization. We also provide an evaluation framework spanning multiple axes, including semantic consistency and readability. On the MultiLexSum dataset, we measure lexical overlap with reference summaries via ROUGE-L, semantic similarity via SBERT, and readability via the Flesch–Kincaid Grade Level (FKGL). The KAPING-IRAC configuration, which combines IRAC-guided structure with tuple-style serialization, consistently outperforms all baselines, achieving the highest alignment with reference summaries while remaining accessible to general audiences. Finally, we provide an interactive Gradio-based demo and open-source code that visualizes how each pipeline stage improves clarity and factual grounding, supporting future applications of structured reasoning for education and decision-making.
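The serialization strategies named in the abstract can be illustrated with a minimal sketch. The function names, triple format, and section grouping below are assumptions for illustration only, not the RuleSum implementation: tuple-style (KAPING-like) serialization flattens KG triples into one `(head, relation, tail)` line each, while the IRAC-labeled variant additionally groups triples under Issue/Rule/Application/Conclusion headers before they are injected into the LLM prompt.

```python
# Hypothetical sketch of the two KG serialization styles described in the
# abstract; names and formats are illustrative assumptions, not RuleSum's code.
from typing import Dict, List, Tuple

Triple = Tuple[str, str, str]  # (head, relation, tail)

IRAC_LABELS = ("Issue", "Rule", "Application", "Conclusion")

def serialize_kaping(triples: List[Triple]) -> str:
    """Tuple-style serialization: one (head, relation, tail) per line."""
    return "\n".join(f"({h}, {r}, {t})" for h, r, t in triples)

def serialize_irac(labeled: Dict[str, List[Triple]]) -> str:
    """IRAC-labeled serialization: triples grouped under each IRAC section."""
    lines = []
    for label in IRAC_LABELS:
        lines.append(f"[{label}]")
        for h, r, t in labeled.get(label, []):
            lines.append(f"  ({h}, {r}, {t})")
    return "\n".join(lines)

def build_prompt(serialized_kg: str, instruction: str) -> str:
    """Prepend the serialized KG facts to a summarization instruction."""
    return (
        "Below are facts extracted from a legal knowledge graph.\n"
        f"{serialized_kg}\n\n"
        f"Using only the facts above, {instruction}"
    )

if __name__ == "__main__":
    kg = {
        "Issue": [("Plaintiff", "alleges", "breach of contract")],
        "Rule": [("UCC §2-207", "governs", "contract formation")],
        "Application": [("Seller's form", "added", "arbitration clause")],
        "Conclusion": [("Court", "held for", "Plaintiff")],
    }
    prompt = build_prompt(serialize_irac(kg), "write a plain-language summary.")
    print(prompt)
```

The design intuition is that the IRAC grouping gives the LLM an explicit reasoning scaffold, so the generated summary inherits the issue-rule-application-conclusion order rather than inventing its own structure.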
Submission Number: 32