An Empirical Study of the Downstream Reliability of Pre-Trained Word Embeddings

Anthony Rios, Brandon Lwowski

03 Jul 2020 (modified: 30 Sep 2020)
Keywords: NLP, word embeddings, offensive language, reliability, fairness
TL;DR: Static pre-trained word embeddings can yield consistent, reliable downstream performance
Abstract: While pre-trained word embeddings have been shown to improve the performance of downstream tasks, many questions remain regarding their reliability: Do the same pre-trained word embeddings result in the best performance with slight changes to the training data? Do the same pre-trained embeddings perform well with multiple neural network architectures? How does downstream fairness relate to the choice of architecture and pre-trained embeddings? In this paper, we introduce two new metrics to understand the downstream reliability of word embeddings. We find that the downstream reliability of word embeddings depends on multiple factors, including the handling of out-of-vocabulary words and whether the embeddings are fine-tuned.
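The abstract points to two factors driving downstream reliability: how out-of-vocabulary (OOV) words are handled and whether the pre-trained embeddings are fine-tuned. The sketch below is a hypothetical illustration of those two knobs in PyTorch, not the authors' code; the function name `build_embedding_layer`, the `oov_strategy` options, and the random-initialization scale are all assumptions for the example.

```python
import numpy as np
import torch
import torch.nn as nn

def build_embedding_layer(vocab, pretrained, dim=300,
                          oov_strategy="random", fine_tune=False):
    """Build an nn.Embedding from pre-trained vectors (illustrative sketch).

    vocab: dict mapping word -> row index
    pretrained: dict mapping word -> np.ndarray of shape (dim,)
    oov_strategy: "random" draws one random vector per OOV word;
                  "zero" leaves OOV rows as all-zeros vectors
    fine_tune: if False, the embedding weights stay frozen during training
    """
    weights = np.zeros((len(vocab), dim), dtype=np.float32)
    for word, idx in vocab.items():
        if word in pretrained:
            weights[idx] = pretrained[word]          # copy pre-trained vector
        elif oov_strategy == "random":
            weights[idx] = np.random.normal(scale=0.1, size=dim)  # OOV: random init
        # oov_strategy == "zero": row stays zero
    # freeze=not fine_tune toggles whether gradients update the embeddings
    return nn.Embedding.from_pretrained(torch.from_numpy(weights),
                                        freeze=not fine_tune)
```

Under this setup, varying `oov_strategy` and `fine_tune` across repeated training runs (e.g., with slightly perturbed training data) is one way to probe the kind of downstream consistency the paper studies.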