Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission · Readers: Everyone
Abstract: Although prompting LLMs with various reasoning structures produces intermediate reasoning steps along with answers, these steps are not guaranteed to be causal and reliable because of the inherent limitations of LLMs. To address these deficiencies, we present a neuro-symbolic integration framework in which a neural LLM represents the knowledge of the problem while an LLM-free symbolic solver performs deliberate reasoning over that knowledge. Specifically, customized meta-interpreters are implemented to generate intermediate reasoning proofs and to support various search strategies. These reasoning proofs are guaranteed to be causal and reliable because of the deterministic execution of the symbolic solver. We conduct experiments on two logical reasoning datasets and one arithmetic reasoning dataset. On ProofWriter, our method nearly doubles the reasoning accuracy of the CoT baseline and more than triples its reasoning proof similarity. On GSM8K, our method also improves accuracy and nearly doubles proof similarity.
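
The pipeline sketched in the abstract (an LLM translating the problem into symbolic knowledge, then a deterministic solver with a meta-interpreter producing a proof under some search strategy) can be illustrated with a small example. The sketch below is not the authors' implementation: it assumes a propositional, Horn-clause knowledge base, uses Python with depth-first backward chaining in place of a Prolog meta-interpreter, and all fact and rule names are hypothetical ProofWriter-style statements.

```python
# Minimal sketch (assumed, not the paper's code): backward chaining over
# Horn-style rules that records a proof tree, showing how a deterministic
# symbolic solver yields causal, step-by-step proofs.
from typing import Optional

# Knowledge base the LLM would produce from the natural-language problem
# (hypothetical propositional facts/rules for brevity).
FACTS = {"bald_eagle_is_a_bird"}
RULES = [
    # (conclusion, [premises])
    ("bald_eagle_has_feathers", ["bald_eagle_is_a_bird"]),
    ("bald_eagle_can_fly", ["bald_eagle_has_feathers"]),
]

def prove(goal: str, depth: int = 0, max_depth: int = 10) -> Optional[dict]:
    """Depth-first backward chaining; returns a proof tree or None."""
    if depth > max_depth:
        return None
    if goal in FACTS:
        return {"goal": goal, "by": "fact", "premises": []}
    for conclusion, premises in RULES:
        if conclusion != goal:
            continue
        subproofs = [prove(p, depth + 1, max_depth) for p in premises]
        if all(sp is not None for sp in subproofs):
            return {"goal": goal, "by": "rule", "premises": subproofs}
    return None

def render(proof: dict, indent: int = 0) -> str:
    """Print the proof tree as indented reasoning steps."""
    line = "  " * indent + f"{proof['goal']}  [{proof['by']}]"
    return "\n".join([line] + [render(p, indent + 1) for p in proof["premises"]])

if __name__ == "__main__":
    proof = prove("bald_eagle_can_fly")
    print(render(proof) if proof else "unprovable")
```

Replacing the depth-first recursion with a queue-based frontier would give breadth-first search, which is one way the "various search strategies" mentioned in the abstract could be realized; the recorded proof tree is what makes every conclusion traceable to the facts and rules that caused it.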
Paper Type: long
Research Area: Question Answering
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English