Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model

Published: 20 Dec 2019 · Last Modified: 05 May 2023 · ICLR 2020 Conference Blind Submission · Readers: Everyone
Abstract: Recent breakthroughs in pretrained language models have shown the effectiveness of self-supervised learning for a wide range of natural language processing (NLP) tasks. In addition to standard syntactic and semantic NLP tasks, pretrained models achieve strong improvements on tasks that involve real-world knowledge, suggesting that large-scale language modeling could be an implicit method for capturing knowledge. In this work, we further investigate the extent to which pretrained models such as BERT capture knowledge using a zero-shot fact completion task. Moreover, we propose a simple yet effective weakly supervised pretraining objective, which explicitly forces the model to incorporate knowledge about real-world entities. Models trained with our new objective yield significant improvements on the fact completion task. When applied to downstream tasks, our model consistently outperforms BERT on four entity-related question answering datasets (i.e., WebQuestions, TriviaQA, SearchQA and Quasar-T) with an average improvement of 2.7 F1, and on a standard fine-grained entity typing dataset (i.e., FIGER) with a 5.7-point accuracy gain.
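The abstract describes the weakly supervised, entity-centric pretraining objective only at a high level. As a rough illustration (not the paper's actual implementation), the sketch below shows one way such a signal could be attached to a BERT encoder: entity mentions are assumed to be corrupted by swapping in other entity names, and a small binary head is trained to flag the swapped spans. The class name `EntityReplacementDetector`, the `span_starts`/`span_ends` inputs, and the boundary-token scoring are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
from transformers import BertModel


class EntityReplacementDetector(nn.Module):
    """Illustrative span classifier: is each entity mention original or swapped?"""

    def __init__(self, encoder_name="bert-base-uncased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Score each mention from its boundary token representations.
        self.scorer = nn.Linear(2 * hidden, 1)

    def forward(self, input_ids, attention_mask, span_starts, span_ends):
        # hidden_states: [batch, seq_len, hidden]
        hidden_states = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        batch_idx = torch.arange(input_ids.size(0)).unsqueeze(-1)
        start_vecs = hidden_states[batch_idx, span_starts]  # [batch, n_spans, hidden]
        end_vecs = hidden_states[batch_idx, span_ends]
        span_repr = torch.cat([start_vecs, end_vecs], dim=-1)
        return self.scorer(span_repr).squeeze(-1)  # one replacement logit per mention


# Hypothetical weak labels: labels[i, j] = 1.0 if mention j in example i was
# swapped for a different entity name, else 0.0; trained with binary cross-entropy.
loss_fn = nn.BCEWithLogitsLoss()
```

The binary span-level loss could be combined with the usual masked language modeling objective during pretraining; how the two losses are weighted is not specified in the abstract.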
Data: [FIGER](https://paperswithcode.com/dataset/figer), [QUASAR](https://paperswithcode.com/dataset/quasar-1), [QUASAR-T](https://paperswithcode.com/dataset/quasar-t), [SearchQA](https://paperswithcode.com/dataset/searchqa), [TriviaQA](https://paperswithcode.com/dataset/triviaqa), [WebQuestions](https://paperswithcode.com/dataset/webquestions)