Premises Reordering in Forward Chaining Improves LLM Symbolic Reasoning
Track: long paper (up to 10 pages)
Keywords: LLMs, logical reasoning, premises order
Abstract: Large Language Models (LLMs) have shown outstanding performance on diverse natural language processing tasks, but they still struggle with complex logical reasoning, limiting their real-world applicability. While previous neuro-symbolic approaches for improving LLMs' performance on logical question-answering (QA) primarily focus on either translation quality or the reasoning process, they largely overlook that LLMs' performance is significantly sensitive to the \textbf{order} of relevant information (also known as \textbf{premises} in logical QA tasks) in the input context.
Motivated by this observation, we propose a method that first reorders the premises of a logical QA problem to align with the premise order of a forward-chaining proof, thereby improving LLM logical reasoning. We then use the LLM to translate both the premises and the question into a symbolic language, and perform symbolic reasoning with an external logic solver over the translated representation.
In this way, both translation and reasoning accuracy are enhanced, because the forward-chaining premise order benefits (i) sequentially formulating the objects during translation and (ii) performing symbolic reasoning during the solver's premise search. Empirical experiments across three benchmarks demonstrate that our premise-reordering method consistently outperforms neuro-symbolic baselines, including both symbolic solver-based and prompt-based methods.
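A minimal sketch of the core reordering idea described above, assuming premises have already been parsed into hypothetical (antecedents, consequent) pairs (facts have empty antecedents); this is an illustrative toy, not the authors' implementation:

```python
def reorder_premises(premises):
    """Reorder premises to follow forward-chaining derivation order.

    Each premise is an (antecedents, consequent) pair; facts have no
    antecedents. A premise is emitted once all of its antecedents are
    derivable from earlier premises, so later premises build on earlier
    ones, mirroring a forward-chaining proof.
    """
    derived = set()
    ordered, remaining = [], list(premises)
    progress = True
    while remaining and progress:
        progress = False
        for premise in list(remaining):
            antecedents, consequent = premise
            if all(a in derived for a in antecedents):
                ordered.append(premise)
                remaining.remove(premise)
                derived.add(consequent)
                progress = True
    # Premises that never fire keep their original relative order.
    return ordered + remaining


# Example: facts surface first, then rules in firing order.
premises = [
    (("wet", "cold"), "ice"),   # rule: wet & cold -> ice
    ((), "wet"),                # fact: wet
    ((), "cold"),               # fact: cold
]
print(reorder_premises(premises))
# [((), 'wet'), ((), 'cold'), (('wet', 'cold'), 'ice')]
```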
Presenter: ~Xin_Zhang91
Format: Yes, the presenting author will attend in person if this work is accepted to the workshop.
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Funding: No, the presenting author of this submission does *not* fall under ICLR’s funding aims, or has sufficient alternate funding.
Submission Number: 132