Improving word mover's distance by leveraging self-attention matrix

Published: 07 Oct 2023, Last Modified: 01 Dec 2023
Venue: EMNLP 2023 Findings
Submission Type: Regular Long Paper
Submission Track: Efficient Methods for NLP
Submission Track 2: Syntax, Parsing and their Applications
Keywords: word embeddings, word mover's distance, optimal transport, Gromov-Wasserstein distance, Fused Gromov-Wasserstein distance, Self-Attention, paraphrase identification, semantic textual similarity
TL;DR: By incorporating BERT's self-attention matrix (SAM) and the Fused Gromov-Wasserstein distance, the proposed method improves WMD by considering both word embeddings and sentence structure.
Abstract: Measuring the semantic similarity between two sentences remains an important task. The word mover's distance (WMD) computes the similarity via the optimal alignment between the sets of word embeddings. However, WMD does not utilize word order, making it challenging to distinguish sentences with large overlaps of similar words, even if they are semantically very different. Here, we attempt to improve WMD by incorporating the sentence structure represented by BERT's self-attention matrix (SAM). The proposed method is based on the Fused Gromov-Wasserstein distance, which simultaneously considers the similarity of the word embeddings and the SAMs when calculating the optimal transport between two sentences. Experiments demonstrate that the proposed method improves WMD and its variants on paraphrase identification while achieving near-equivalent performance on semantic textual similarity.
Submission Number: 902
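
For readers who want to experiment with the idea described in the abstract, below is a minimal sketch (not the authors' released code) that combines a WMD-style word-embedding cost with BERT's self-attention matrix via the Fused Gromov-Wasserstein distance, using the POT library and Hugging Face transformers. The choices of bert-base-uncased, last-layer head-averaged attention, symmetrization of the SAM, uniform word weights, and alpha = 0.5 are illustrative assumptions, not details taken from the paper.

import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

def encode(sentence):
    """Return token embeddings and a self-attention matrix (SAM) for a sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    emb = out.last_hidden_state[0].numpy().astype(np.float64)  # (n_tokens, hidden)
    # One plausible SAM choice: average the last layer's attention heads,
    # then symmetrize so it can serve as an FGW structure matrix.
    sam = out.attentions[-1][0].mean(dim=0).numpy().astype(np.float64)
    return emb, 0.5 * (sam + sam.T)

def fgw_distance(s1, s2, alpha=0.5):
    """FGW distance: (1 - alpha) weighs the embedding cost, alpha the SAM structure."""
    e1, c1 = encode(s1)
    e2, c2 = encode(s2)
    M = ot.dist(e1, e2, metric="euclidean")    # WMD-style word-embedding cost
    p, q = ot.unif(len(e1)), ot.unif(len(e2))  # uniform word weights
    return ot.gromov.fused_gromov_wasserstein2(M, c1, c2, p, q,
                                               loss_fun="square_loss", alpha=alpha)

# Word-overlapping sentences with different meanings, which plain WMD conflates:
print(fgw_distance("The dog bit the man.", "The man bit the dog."))

With alpha = 0, the objective reduces to a pure embedding-cost optimal transport (WMD-like); with alpha = 1, it is a pure Gromov-Wasserstein comparison of the two attention structures, so alpha controls the trade-off the abstract describes.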