Learning Symbolic Rules for Reasoning in Quasi-Natural Language
Abstract: Symbolic reasoning, i.e., rule-based symbol manipulation, is a hallmark of human intelligence. However, rule-based systems have had limited success competing with learning-based systems outside formalized domains such as automated theorem proving. We hypothesize that this is due to the manual construction of rules in past attempts. In this work, we take initial steps towards rule-based systems that can reason with natural language but without manually constructed rules. We propose MetaQNL, a "Quasi-Natural Language" that can express both formal logic and natural language sentences, and MetaInduce, a learning algorithm that induces MetaQNL rules from training data consisting of questions and answers, with or without intermediate reasoning steps. In addition, we introduce soft matching—a flexible mechanism for applying rules without rigid matching, overcoming a typical source of brittleness in symbolic reasoning. Our approach achieves state-of-the-art accuracies on multiple reasoning benchmarks; it learns compact models with much less data and produces not only answers but also checkable proofs. Further, experiments on two simple real-world datasets demonstrate that our method can handle noise and ambiguity.
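To illustrate the soft-matching idea mentioned in the abstract, here is a minimal, purely hypothetical sketch: a rule premise is applied to a sentence even when the strings do not match exactly, using token-overlap similarity as a stand-in. The actual MetaQNL mechanism is not token overlap; the function names and threshold below are assumptions for illustration only.

```python
# Hypothetical sketch of "soft matching": apply a symbolic rule premise
# to a sentence without requiring an exact string match. The real MetaQNL
# mechanism differs; this only conveys the intuition via token overlap.

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the token sets of two strings."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def soft_match(premise: str, sentence: str, threshold: float = 0.6) -> bool:
    """Accept the sentence as matching the premise if similarity is high enough."""
    return jaccard(premise.lower(), sentence.lower()) >= threshold

# Rigid string matching would reject this pair; soft matching accepts it.
print(soft_match("the cat is an animal", "a cat is an animal"))  # True
```

Rigid matching fails on such trivial surface variation ("the" vs. "a"), which is the brittleness the abstract refers to; a soft criterion tolerates it while still rejecting unrelated sentences.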
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Incorporated reviewers' comments on additional literature (Sec. 2), comparisons (Sec. 6.2, page 11), and scalability (Sec. 4.3).
Supplementary Material: zip
Assigned Action Editor: ~Angeliki_Lazaridou2
Submission Number: 732