CausalR: Causal Reasoning over Natural Language Rulebases

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission · Readers: Everyone
Abstract: Transformers have been shown to perform deductive reasoning over logical rulebases containing rules and statements written in natural language. Recent work shows that such models can also produce the reasoning steps (i.e., the proof graph) that emulate the model's logical reasoning process. However, these models behave as black-box units that emulate the reasoning process without any causal constraints on the reasoning steps, raising questions about their faithfulness. In this work, we frame the deductive logical reasoning task as a causal process with three modular components: rule selection, fact selection, and knowledge composition. The rule and fact selection steps choose the candidate rule and facts to be used, and the knowledge composition step combines them to generate new inferences. This ensures faithfulness by enforcing a causal relation from each proof step to the inference it produces. To test our causal reasoning framework, we propose CausalR, in which the three components are independently modeled by transformers. We observe that CausalR is robust to novel language perturbations and performs on par with previous works on existing reasoning datasets. Furthermore, owing to our multi-modular approach, the errors made by CausalR are more interpretable than those of black-box generative models.
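To make the three-component causal loop concrete, here is a minimal sketch of how rule selection, fact selection, and knowledge composition could chain together into a proof graph. All names here (select_rule, select_facts, compose, ProofStep) are hypothetical illustrations, not the authors' actual API; in CausalR each module would be a separately trained transformer over natural-language rules, whereas this stub uses symbolic tuples so the pipeline is runnable end to end.

```python
# Hypothetical sketch of the CausalR-style pipeline: each module below stands
# in for a transformer, and rules are ("antecedents", "consequent") tuples
# instead of natural-language sentences.
from dataclasses import dataclass


@dataclass
class ProofStep:
    rule: tuple       # the selected rule
    facts: list       # the selected supporting facts
    inference: str    # the newly composed conclusion


def select_rule(rules, facts):
    # Stub for the rule-selection module: pick a rule whose antecedents all
    # hold and whose consequent is not yet derived.
    for antecedents, consequent in rules:
        if consequent not in facts and all(a in facts for a in antecedents):
            return (antecedents, consequent)
    return None


def select_facts(rule, facts):
    # Stub for the fact-selection module: gather the facts that satisfy the
    # chosen rule's antecedents.
    antecedents, _ = rule
    return [f for f in facts if f in antecedents]


def compose(rule, selected_facts):
    # Stub for the knowledge-composition module: combine the rule and the
    # selected facts into a new inference (here, simply the consequent).
    _, consequent = rule
    return consequent


def reason(rules, facts, max_steps=10):
    """Iterate the causal pipeline, recording every proof step so each
    inference is traceable to the rule and facts that produced it."""
    proof, facts = [], set(facts)
    for _ in range(max_steps):
        rule = select_rule(rules, facts)
        if rule is None:
            break
        used = select_facts(rule, facts)
        inference = compose(rule, used)
        proof.append(ProofStep(rule=rule, facts=used, inference=inference))
        facts.add(inference)
    return facts, proof


if __name__ == "__main__":
    # Toy rulebase standing in for natural-language rules.
    rules = [(("nice", "kind"), "green"), (("green",), "rough")]
    final_facts, proof = reason(rules, {"nice", "kind"})
    for step in proof:
        print(f"{step.facts} + {step.rule} => {step.inference}")
```

The point of the explicit ProofStep record is the faithfulness claim in the abstract: every inference is produced only from the rule and facts selected in that step, so the returned proof list is itself the causal trace rather than a post-hoc rationalization.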