How can BERT Understand High-level Semantics?

Published: 27 May 2021, Last Modified: 24 May 2023
Venue: NSNLI Oral
Keywords: NLP, PTLMs, Knowledge Bases, Wikidata, BERT
TL;DR: Should we fine-tune BERT towards more semantically grounded objectives?
Abstract: Pre-trained language models (PTLMs), such as BERT, ELMo and XLNet, have yielded state-of-the-art performance on many natural language processing tasks. In this position paper, we argue that, despite their popularity and their contextual nature, PTLMs do not correctly capture high-level semantics such as linking age to date of birth and rich to net worth. Humans perform these kinds of semantically-based inferences systematically, which is why we would assume that PTLMs, with all their language capabilities, can make similar predictions. We show that PTLMs perform well when predicting taxonomic relationships, but fail at attribute-value semantics such as the link between rich and net worth.
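The contrast between taxonomic and attribute-value relations can be probed with simple cloze-style prompts. The snippet below is only an illustrative sketch using the Hugging Face transformers library and bert-base-uncased; it is not the evaluation protocol used in the paper, and the prompts are invented examples.

```python
# Illustrative sketch (not the paper's method): cloze-style probing of BERT
# to contrast a taxonomic relation with an attribute-value relation.
# Assumes the Hugging Face `transformers` library and `bert-base-uncased`.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

prompts = [
    # Taxonomic relation: the kind of prediction PTLMs tend to get right.
    "A sparrow is a type of [MASK].",
    # Attribute-value relation (rich -> net worth): the kind of inference
    # the paper argues PTLMs fail to capture.
    "A person who is rich has a high [MASK].",
]

for prompt in prompts:
    print(prompt)
    for pred in fill(prompt, top_k=5):
        # Each prediction carries the filled-in token and its probability.
        print(f"  {pred['token_str']:>12}  {pred['score']:.3f}")
```

Comparing the top-ranked fillers for the two prompts gives a quick, informal sense of whether the model links the attribute word to its underlying value (net worth) as readily as it places a sparrow in the bird taxonomy.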
Track: Position paper