The Finer They Get: Combining Fine-Tuned Models For Better Semantic Change Detection

Published: 20 Mar 2023, Last Modified: 18 Apr 2023
Venue: NoDaLiDa 2023
Keywords: lexical semantic change, fine-tuning contextual models, evaluation
TL;DR: We investigate the hypothesis that adding linguistic information to pre-trained language models by means of fine-tuning improves performance on unsupervised lexical semantic change detection.
Abstract: In this work we investigate the hypothesis that enriching contextualized models through fine-tuning tasks can improve their capacity to detect lexical semantic change (LSC). We include tasks aimed at capturing both low-level linguistic information, such as part-of-speech tagging, and higher-level (semantic) information. Through a series of analyses we demonstrate that certain combinations of fine-tuning tasks, such as sentiment, syntactic information, and logical inference, bring large improvements over LSC models based only on standard language modeling. We test on the binary classification and ranking tasks of SemEval-2020 Task 1 and evaluate both with permutation tests and under transfer-learning scenarios.
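To make the setup concrete, below is a minimal sketch of a typical contextualized-embedding LSC pipeline of the kind the abstract refers to: extract contextual embeddings of a target word from two time-period corpora and score change as the cosine distance between its per-period "prototype" vectors. This is an illustration under assumptions (HuggingFace transformers, a single-wordpiece target word, plain prototype averaging), not necessarily the authors' exact method; in their setting the base model would additionally be fine-tuned on auxiliary tasks before extraction.

```python
# Sketch of a prototype-embedding LSC scorer (assumption-laden illustration,
# not the paper's exact pipeline).
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # in practice, a model fine-tuned on auxiliary tasks
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def prototype_embedding(target: str, sentences: list[str]) -> torch.Tensor:
    """Average contextualized embedding of `target` over its occurrences.
    Simplification: assumes `target` maps to a single wordpiece."""
    target_id = tokenizer.convert_tokens_to_ids(target)
    vectors = []
    with torch.no_grad():
        for sent in sentences:
            enc = tokenizer(sent, return_tensors="pt", truncation=True)
            hidden = model(**enc).last_hidden_state.squeeze(0)  # (seq_len, dim)
            mask = enc["input_ids"].squeeze(0) == target_id
            if mask.any():
                vectors.append(hidden[mask].mean(dim=0))
    return torch.stack(vectors).mean(dim=0)

def change_score(target: str, corpus_t1: list[str], corpus_t2: list[str]) -> float:
    """Cosine distance between the word's prototype vectors in the two periods."""
    v1 = prototype_embedding(target, corpus_t1)
    v2 = prototype_embedding(target, corpus_t2)
    return 1.0 - torch.nn.functional.cosine_similarity(v1, v2, dim=0).item()

# SemEval-2020 Task 1: rank target words by change_score (Subtask 2),
# or threshold the score for binary change detection (Subtask 1).
```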
Student Paper: Yes, the first author is a student