Enhancing Unsupervised Pretraining with External Knowledge for Natural Language Inference

Canadian Conference on AI 2019 (modified: 19 Dec 2021)
Abstract: Unsupervised pretraining such as BERT (Bidirectional Encoder Representations from Transformers) [2] represents one of the most recent advances in learning representations for natural language and has helped achieve leading performance on many natural language processing problems. Although BERT can leverage large corpora, we hypothesize that it cannot learn all of the semantics and knowledge needed for natural language inference (NLI). In this paper, we leverage human-authored external knowledge to further improve BERT, and our results show that BERT, the current state-of-the-art pretraining framework, can benefit from external knowledge.
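
The abstract does not specify how the external knowledge is injected into BERT. As an illustration only, the sketch below shows one simple way such a combination could work: concatenating hand-crafted lexical-relation features (e.g., WordNet-style synonym/antonym/hypernym indicators between premise and hypothesis words) with BERT's pooled sentence-pair representation before an NLI classifier. The class name `KnowledgeEnhancedNLI`, the feature dimension, and the feature scheme are hypothetical assumptions, not the paper's actual method.

```python
# Hedged sketch (not the paper's released code): fuse BERT's pooled
# premise-hypothesis representation with external-knowledge features
# (hypothetical lexical-relation indicators) for 3-way NLI classification.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class KnowledgeEnhancedNLI(nn.Module):
    def __init__(self, knowledge_dim=5, num_labels=3):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        # Classifier over [BERT pooled output ; external-knowledge features].
        self.classifier = nn.Linear(hidden + knowledge_dim, num_labels)

    def forward(self, input_ids, attention_mask, token_type_ids, knowledge_feats):
        out = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask,
                        token_type_ids=token_type_ids)
        pooled = out.pooler_output                    # (batch, hidden)
        fused = torch.cat([pooled, knowledge_feats], dim=-1)
        return self.classifier(fused)                 # (batch, num_labels)


# Usage sketch: knowledge_feats would normally be computed from a lexical
# resource (e.g., counts of synonym/antonym/hypernym pairs between premise
# and hypothesis words); a random placeholder is used here.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer("A man is playing a guitar.", "A person plays music.",
                return_tensors="pt")
model = KnowledgeEnhancedNLI()
logits = model(enc["input_ids"], enc["attention_mask"],
               enc["token_type_ids"], torch.rand(1, 5))
```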