Can BERT Conduct Logical Reasoning? On the Difficulty of Learning to Reason from Data

Anonymous

16 Jan 2022 (modified: 05 May 2023) · ACL ARR 2022 January Blind Submission · Readers: Everyone
Abstract: Logical reasoning is needed in a wide range of NLP tasks. In this work, we seek to answer one research question: can we train a BERT model to solve logical reasoning problems written in natural language? We study this question in a confined problem space and train a BERT model on randomly drawn data. We report a rather surprising finding: even when BERT achieves nearly perfect accuracy on the test data, it learns only an incorrect, partial reasoning function; further investigation shows that the behaviour of the model (i.e., the learned partial reasoning function) is unreasonably sensitive to the training data. Our work reveals the difficulty of learning to reason from data and shows that near-perfect performance on randomly drawn data is not a sufficient indicator of a model's ability to conduct logical reasoning.
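For readers unfamiliar with the setup the abstract describes, the following is a minimal sketch (not the authors' code) of fine-tuning BERT as a binary classifier that, given a natural-language theory (facts and rules) and a query, predicts whether the query logically follows. The dataset entries, example strings, and hyperparameters are illustrative assumptions, not details from the paper.

```python
import torch
from torch.optim import AdamW
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 1 = query entailed, 0 = not entailed
)

# Hypothetical randomly drawn training examples: (theory, query, label).
train_examples = [
    ("Alice is kind. If someone is kind then they are nice.",
     "Alice is nice.", 1),
    ("Bob is tall. If someone is kind then they are nice.",
     "Bob is nice.", 0),
]

optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
for theory, query, label in train_examples:
    # Encode the theory and query as a standard BERT sentence pair.
    inputs = tokenizer(theory, query, return_tensors="pt",
                       padding=True, truncation=True)
    labels = torch.tensor([label])
    outputs = model(**inputs, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Under this kind of setup, the paper's point is that high test accuracy on data drawn from the same random distribution can coexist with a learned function that does not implement the underlying logical reasoning.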
Paper Type: long