Keywords: Neural Symbolic, Pretrained Language Model, Logical Reasoning
Abstract: Pre-trained large language models (LMs) struggle to perform logical reasoning reliably despite advances in scale and compositionality. In this work, we tackle this challenge through the lens of symbolic programming. We propose DSR-LM, a Differentiable Symbolic Reasoning framework in which pre-trained LMs govern the perception of factual knowledge and a symbolic module performs deductive reasoning. In contrast to works that rely on hand-crafted logic rules, our differentiable symbolic reasoning framework efficiently learns weighted rules to further improve LMs. DSR-LM is scalable, interpretable, and allows easy integration of prior knowledge, thereby supporting extensive symbolic programming to robustly derive a logical conclusion. Our experiments show that DSR-LM improves the logical reasoning of pre-trained LMs, yielding an accuracy gain of over 10%, and outperforms a spectrum of competitive baselines under systematic distribution shifts on sequence lengths.
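The abstract describes a two-part architecture: a neural module that perceives (scores) candidate facts and a differentiable symbolic module whose weighted rules are learned end-to-end. Below is a minimal sketch of that idea, not the authors' implementation; the toy kinship task, the stand-in fact scorer (in place of a pre-trained LM), and the soft-logic choices (product for AND, max over the joined variable) are all illustrative assumptions.

```python
# Minimal sketch of differentiable symbolic reasoning with learned rule weights.
# A neural module scores candidate facts; a rule layer composes them softly;
# rule confidences are trained by backpropagation from a downstream answer.
import torch
import torch.nn as nn

NUM_REL = 4   # toy relation vocabulary, e.g. {parent, child, sibling, grandparent}
NUM_ENT = 3   # toy entities: 0, 1, 2

class NeuralFactScorer(nn.Module):
    """Stand-in for the pre-trained LM: maps each ordered (entity, entity)
    pair to a probability for every candidate relation fact."""
    def __init__(self, dim=16):
        super().__init__()
        self.emb = nn.Embedding(NUM_ENT, dim)
        self.head = nn.Linear(2 * dim, NUM_REL)

    def forward(self):
        e = self.emb.weight
        pairs = torch.cat(
            [e.unsqueeze(1).expand(-1, NUM_ENT, -1),
             e.unsqueeze(0).expand(NUM_ENT, -1, -1)], dim=-1)
        return torch.sigmoid(self.head(pairs))  # (NUM_ENT, NUM_ENT, NUM_REL)

class WeightedRuleLayer(nn.Module):
    """Differentiable deduction step: rel_c(x, z) <- rel_a(x, y) AND rel_b(y, z),
    with one learnable confidence per (rel_a, rel_b, rel_c) rule."""
    def __init__(self):
        super().__init__()
        self.rule_logits = nn.Parameter(torch.zeros(NUM_REL, NUM_REL, NUM_REL))

    def forward(self, facts):
        w = torch.sigmoid(self.rule_logits)  # rule confidences in (0, 1)
        # Soft join over the shared variable y: AND = product, OR over y = max.
        body = torch.einsum('xya,yzb->xyzab', facts, facts).max(dim=1).values
        derived = torch.einsum('xzab,abc->xzc', body, w)
        return torch.clamp(facts + derived, max=1.0)  # keep base facts, cap at 1

scorer, rules = NeuralFactScorer(), WeightedRuleLayer()
opt = torch.optim.Adam(list(scorer.parameters()) + list(rules.parameters()), lr=1e-2)

# Toy supervision: the answer "grandparent(0, 2)" (relation index 3) should hold.
for _ in range(200):
    derived = rules(scorer())
    loss = nn.functional.binary_cross_entropy(derived[0, 2, 3], torch.tensor(1.0))
    opt.zero_grad(); loss.backward(); opt.step()

print("p(grandparent(0, 2)) =", derived[0, 2, 3].item())
```

Because both the fact scores and the rule confidences sit on the gradient path, supervision on the final derived answer trains the perception module and the symbolic rules jointly, which is the property the abstract attributes to DSR-LM.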
Paper Type: long
Research Area: Machine Learning for NLP