Enhancing Logical Reasoning of Large Language Models via Phased Fine-Tuning

20 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Large language model, Logical reasoning, Fine-tuning
Abstract: Large language models (LLMs) have not only achieved impressive progress on natural language processing tasks but also demonstrated remarkable performance in practical applications such as intelligent customer service. However, LLMs still fall short in complex logical reasoning and decision-making. As a key element of human intelligence, logical reasoning plays a crucial role in tasks including natural language understanding, intelligent question answering, and knowledge graph construction. This deficiency significantly limits the application of LLMs, especially in domains requiring high accuracy and trustworthiness. To tackle this issue, we focus on propositional logic and introduce a phased fine-tuning method tailored to logic question answering, which performs supervised fine-tuning from easy to hard. The difficulty of each logic question-answering sample is computed automatically from two signals: the symbolic complexity of its logical form, derived from its disjunctive normal form (DNF), and the LLM's reasoning complexity on propositional logic question-answering tasks. The training data is then stratified by this difficulty. We construct a dedicated fine-tuning dataset for propositional logic, and experiments demonstrate that our method is highly effective, especially on tasks demanding strong logical reasoning ability.
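The abstract's difficulty-stratified curriculum can be illustrated with a minimal sketch: score each sample by the complexity of its DNF formula, then bucket samples from easy to hard for phased supervised fine-tuning. The scoring function (clause count plus literal count) and the field names below are illustrative assumptions, not the paper's exact definitions, and the sketch omits the second signal (LLM reasoning complexity).

```python
def dnf_complexity(dnf: str) -> int:
    """Score a DNF string like '(A & ~B) | C' by clause count plus literal count.

    This heuristic is an assumption standing in for the paper's DNF-based
    symbolic complexity measure.
    """
    clauses = [c.strip(" ()") for c in dnf.split("|")]
    literals = sum(len(c.split("&")) for c in clauses)
    return len(clauses) + literals

def stratify(samples, n_buckets=3):
    """Sort samples by difficulty and split them into easy-to-hard phases."""
    ranked = sorted(samples, key=lambda s: dnf_complexity(s["dnf"]))
    size = -(-len(ranked) // n_buckets)  # ceiling division
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]

# Hypothetical samples: each question paired with its DNF logical form.
samples = [
    {"q": "...", "dnf": "(A & B & ~C) | (D & E)"},
    {"q": "...", "dnf": "A"},
    {"q": "...", "dnf": "(A & ~B) | C"},
]
phases = stratify(samples)
# Phased SFT would then fine-tune on phases[0], then phases[1], and so on.
```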
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Submission Number: 23017