Comparing Contextual and Static Word Embeddings with Small Philosophical Data

12 Oct 2022 · OpenReview Archive Direct Upload
Abstract: For domain-specific NLP tasks, applying word embeddings trained on general corpora is not optimal. Meanwhile, training domain-specific word representations poses challenges for dataset construction and embedding evaluation. In this paper, we present and compare ELMo and Word2Vec models trained or fine-tuned on philosophical data. For evaluation, a conceptual network was used. Results show that contextualized models provide better word embeddings than static models, and that merging embeddings from different models boosts task performance.
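The abstract does not specify how embeddings from different models were merged; a minimal sketch of one common approach, L2-normalizing each model's vector for a word and concatenating them, is shown below. The vectors and dimensions here are hypothetical placeholders (e.g. a 300-d static Word2Vec vector and a 1024-d contextual ELMo vector), not the paper's actual data.

```python
import numpy as np

def merge_embeddings(static_vec, contextual_vec):
    """Merge two word vectors by L2-normalizing each and concatenating.

    This is one simple merging strategy; the paper may use a different one.
    """
    static_norm = static_vec / np.linalg.norm(static_vec)
    contextual_norm = contextual_vec / np.linalg.norm(contextual_vec)
    return np.concatenate([static_norm, contextual_norm])

# Hypothetical embeddings for a single word, e.g. "substance":
w2v_vec = np.random.rand(300)    # static embedding (Word2Vec-sized)
elmo_vec = np.random.rand(1024)  # contextual embedding (ELMo-sized)

merged = merge_embeddings(w2v_vec, elmo_vec)
print(merged.shape)  # (1324,)
```

Normalizing before concatenation keeps one model's larger vector magnitudes from dominating downstream similarity computations.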