Teaching Probabilistic Logical Reasoning to Transformers

Anonymous

16 Oct 2023, ACL ARR 2023 October Blind Submission, Readers: Everyone
Abstract: In this paper, we evaluate the ability of transformer-based language models to reason over uncertain text that includes uncertain rules of reasoning. We cover pre-trained language models (PLMs) and the newer large language models (LLMs). Our evaluation shows that both generations of language models struggle with reasoning over uncertain text. Focusing on PLMs, we propose a novel neuro-symbolic fine-tuning approach, Probabilistic Constraint Training (PCT), which incorporates probabilistic logical rules as constraints during fine-tuning. To assess the effectiveness of PCT, we use the related corpora and additionally create a new, more challenging benchmark that, unlike previous ones, uses instance-specific rules. Our study demonstrates the potential of PCT as the first method to improve the accuracy of transformer-based language models and the explainability of their probabilistic logical reasoning. Furthermore, PCT equips these models to handle novel situations effectively, including greater reasoning depth, new domains, and complex probabilistic structures.
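The abstract does not specify the training objective, but the core idea of constraint-based fine-tuning can be illustrated with a small sketch. The snippet below shows one plausible form: a supervised loss on predicted probabilities plus a penalty that keeps predictions consistent with an uncertain rule under product semantics (P(conclusion) ≈ P(rule) · P(premise)). The ToyReasoner model, the product-rule constraint, and the lambda_constraint weight are illustrative assumptions, not the authors' released implementation.

```python
# A minimal sketch of constraint-regularized fine-tuning in PyTorch.
# Assumption: PCT-style training adds a penalty that ties the predicted
# probability of a conclusion to the value implied by an uncertain rule.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyReasoner(nn.Module):
    """Stand-in for a PLM head: maps a statement encoding to P(statement holds)."""

    def __init__(self, dim: int = 16):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.scorer(x)).squeeze(-1)


def constraint_penalty(p_premise, p_rule, p_conclusion):
    # Penalize deviation from the probability implied by the uncertain rule
    # under an assumed product semantics: P(conclusion) ~= P(rule) * P(premise).
    implied = p_rule * p_premise
    return F.mse_loss(p_conclusion, implied)


# Toy batch: encodings for premise and conclusion statements, gold conclusion
# probabilities, and the rule probabilities stated in the text (all synthetic).
torch.manual_seed(0)
premise_enc = torch.randn(8, 16)
conclusion_enc = torch.randn(8, 16)
gold_conclusion = torch.rand(8)
rule_prob = torch.rand(8)

model = ToyReasoner()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lambda_constraint = 0.5  # constraint weight; a hypothetical hyperparameter

for step in range(100):
    p_premise = model(premise_enc)
    p_conclusion = model(conclusion_enc)
    # Supervised term: match the annotated conclusion probability.
    supervised = F.mse_loss(p_conclusion, gold_conclusion)
    # Constraint term: keep the conclusion prediction consistent with the rule.
    # The premise probability is detached so the penalty only adjusts the
    # conclusion prediction (one possible design choice).
    penalty = constraint_penalty(p_premise.detach(), rule_prob, p_conclusion)
    loss = supervised + lambda_constraint * penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this sketch the constraint term plays the role the abstract attributes to the probabilistic logical rules: it regularizes fine-tuning toward predictions that are jointly consistent, rather than supervising each statement in isolation.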
Paper Type: long
Research Area: Question Answering
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Consent To Share Submission Details: On behalf of all authors, we agree to the terms above to share our submission details.