Can Pre-trained Language Models Interpret Similes as Smart as Human?

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission · Readers: Everyone
Abstract: Simile interpretation is a crucial task in natural language processing. Pre-trained language models (PLMs) now achieve state-of-the-art performance on many tasks, but whether they can interpret similes remains under-explored. In this paper, we investigate the simile-interpretation ability of PLMs by designing a novel task named Simile Property Probing, which asks PLMs to infer the shared property of a simile's topic and vehicle. We construct simile property probing datasets from both a general textual corpus and human-designed questions, containing a total of 1,633 examples covering seven main categories. Our empirical study on these datasets shows that PLMs can infer the shared properties of similes, yet still underperform humans. To narrow this gap, we further design a knowledge-enhanced training objective that injects simile knowledge into PLMs via knowledge embedding methods. Our method yields up to an 8.58% gain on the probing task and up to a 1.37% gain on the downstream task of sentiment classification. The datasets and code will be publicly available soon.