Abstract: Lexical substitution, a fundamental task in natural language processing, aims to replace a target word with a semantically equivalent or synonymous substitute while preserving the original sentence meaning. Although extensively explored, existing methods exhibit two major limitations: 1) inadequate investigation of embedding representations when the target word is split into subwords, and 2) excessive hyperparameters and computational complexity arising from multi-metric evaluation during candidate ranking. To address these issues, we propose LexSubDis, which constructs a more MLM-compatible substitution mechanism by averaging the subword embeddings of the target word and combining them with synonym embeddings. Moreover, we pioneer the introduction of discriminator models to assess the semantic impact of substitutions. Experimental results demonstrate that LexSubDis significantly reduces the number of hyperparameters while achieving state-of-the-art unsupervised performance on the CoInCo dataset's $ootm$ metric, offering novel insights and solutions for lexical substitution research.
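The embedding-construction step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the `alpha` mixing weight, and the linear-interpolation form of "combining" are all assumptions made for the example.

```python
import numpy as np

def target_embedding(subword_embs: np.ndarray) -> np.ndarray:
    """Average the embeddings of a target word's subword pieces.

    subword_embs: array of shape (num_subwords, dim).
    """
    return subword_embs.mean(axis=0)

def combine_with_synonym(target_emb: np.ndarray,
                         synonym_emb: np.ndarray,
                         alpha: float = 0.5) -> np.ndarray:
    """Linearly combine the averaged target embedding with a synonym
    embedding. The interpolation weight `alpha` is illustrative; the
    paper's actual combination scheme may differ.
    """
    return alpha * target_emb + (1.0 - alpha) * synonym_emb

# Example: a target word tokenized into two subword pieces.
subwords = np.array([[1.0, 0.0], [0.0, 1.0]])   # (2 subwords, dim=2)
tgt = target_embedding(subwords)                 # -> [0.5, 0.5]
syn = np.array([1.0, 1.0])
mixed = combine_with_synonym(tgt, syn, alpha=0.5)  # -> [0.75, 0.75]
```

The averaged vector can then be fed to the masked language model in place of a single-token embedding, which is what makes the mechanism "MLM-compatible" for multi-subword targets.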
Paper Type: Long
Research Area: Semantics: Lexical and Sentence-Level
Research Area Keywords: lexical relationships, lexical semantic change, semantic textual similarity
Languages Studied: English
Submission Number: 410