ReasonBERT: Pre-trained to Reason with Distant Supervision

Anonymous

23 May 2021, OpenReview Anonymous Preprint Blind Submission
Keywords: Question Answering, language model pre-training, table understanding
TL;DR: A new pre-training method that focuses on the model's reasoning ability and achieves significant improvements on question answering datasets ranging from single-hop to multi-hop and from text-only to table-only to hybrid.
Abstract: We present ReasonBERT, a pre-training method that augments language models with the ability to reason over long-range relations and multiple, possibly hybrid, contexts. Unlike existing pre-training methods that only harvest learning signals from the local contexts of naturally occurring text, we propose a generalized notion of distant supervision that automatically connects multiple pieces of text and tables to create pre-training examples requiring long-range reasoning. Different types of reasoning are simulated, including intersecting multiple pieces of evidence, bridging from one piece of evidence to another, and detecting unanswerable cases. We conduct a comprehensive evaluation on a variety of extractive question answering datasets, ranging from single-hop to multi-hop and from text-only to table-only to hybrid, that require various reasoning capabilities, and show that ReasonBERT achieves remarkable improvements over an array of strong baselines. Few-shot experiments further demonstrate that our pre-training method substantially improves sample efficiency. The pre-trained model is available on Hugging Face at https://huggingface.co/Anonymous
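As a conceptual sketch only, not the authors' actual data pipeline, the snippet below illustrates the distant-supervision idea described in the abstract: a query sentence is paired with other passages that mention the same pair of entities, and one entity mention in the query is masked so that the model must recover it from the linked evidence. All names here, including `make_pretraining_example`, the `[QUESTION]` placeholder token, and the toy corpus, are illustrative assumptions rather than details taken from the paper.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Example:
    query: str            # query sentence with one entity mention masked
    evidence: List[str]   # distantly linked passages mentioning both entities
    answer: str           # the masked entity span the model must recover

def make_pretraining_example(query_sentence: str, entity_a: str, entity_b: str,
                             corpus: List[str]) -> Optional[Example]:
    """Hypothetical sketch: pair a query with distantly supervised evidence."""
    # Distant supervision: any other passage mentioning both entities is taken
    # as (noisy) evidence for the relation expressed in the query sentence.
    evidence = [p for p in corpus
                if entity_a in p and entity_b in p and p != query_sentence]
    if not evidence:
        # Queries with no linked evidence could also be kept to simulate
        # the unanswerable cases mentioned in the abstract.
        return None
    # Mask one entity in the query; the model must recover it from the evidence.
    masked_query = query_sentence.replace(entity_b, "[QUESTION]")
    return Example(query=masked_query, evidence=evidence, answer=entity_b)

# Toy usage with a hypothetical three-sentence corpus
corpus = [
    "Barack Obama was born in Honolulu, Hawaii.",
    "Obama spent much of his childhood in Honolulu.",
    "Honolulu is the capital of Hawaii.",
]
example = make_pretraining_example(corpus[0], "Obama", "Honolulu", corpus)
print(example.query)     # "Barack Obama was born in [QUESTION], Hawaii."
print(example.evidence)  # ["Obama spent much of his childhood in Honolulu."]
```

In this toy setting, only the second sentence mentions both entities, so it becomes the evidence; the same pairing idea extends to tables and to multi-hop chains, where bridging entities connect one piece of evidence to another.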