MORTY Embedding: Improved Embeddings without Supervision

Anonymous

24 Jan 2019 (modified: 28 Jun 2019), OpenReview Anonymous Preprint Blind Submission
Abstract: We demonstrate a low-effort method that constructs task-optimized embeddings from existing word embeddings, without supervision, to gain performance on a supervised end-task. This avoids additional labeling and more complex model architectures by instead providing specialized embeddings better fit for the end-task(s). Furthermore, the method can be used to roughly estimate whether a specific kind of end-task can be learned from, or is represented in, a given unlabeled dataset, e.g. using publicly available probing tasks. We evaluate our method on diverse word embedding probing tasks and across embedding training corpus sizes, i.e. to explore its use in settings with reduced pretraining resources.
Keywords: Unsupervised Learning, Representation Learning, Transfer Learning
TL;DR: Morty refits pretrained word embeddings to either (a) improve overall embedding performance (for multi-task settings) or (b) improve single-task performance, while requiring only minimal effort.
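
The abstract and TL;DR describe what the refitting achieves but not its mechanics. As a purely illustrative sketch of one plausible instantiation, the Python snippet below retrofits pretrained vectors by training a small linear autoencoder to reconstruct them, keeps the encoder output as the new embedding, and repeats the process with different random seeds to produce candidate embedding sets for end-task or probing-task selection. The function name, the autoencoder mechanism, and all hyperparameters are assumptions made for illustration, not the authors' reference implementation.

```python
# Illustrative sketch only: refit pretrained embeddings without labels by
# reconstructing them with a one-layer linear autoencoder (an assumed
# mechanism, not necessarily the paper's exact method).
import numpy as np

def morty_like_refit(pretrained, hidden_dim=300, lr=0.01, epochs=50, seed=0):
    """Reconstruct `pretrained` (vocab x dim) with a linear autoencoder;
    return the hidden activations as the refit embeddings.
    All defaults are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    vocab, dim = pretrained.shape
    W_enc = rng.normal(0, 0.1, (dim, hidden_dim))
    W_dec = rng.normal(0, 0.1, (hidden_dim, dim))
    for _ in range(epochs):
        H = pretrained @ W_enc        # encode
        R = H @ W_dec                 # reconstruct
        err = R - pretrained          # reconstruction error
        # plain gradient descent on the mean squared reconstruction loss
        grad_dec = H.T @ err / vocab
        grad_enc = pretrained.T @ (err @ W_dec.T) / vocab
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    return pretrained @ W_enc         # refit embeddings

# Usage sketch: generate several candidates from different random seeds,
# then pick the one scoring best on a cheap probing/proxy task.
emb = np.random.default_rng(1).normal(size=(1000, 50)).astype(np.float32)
candidates = [morty_like_refit(emb, hidden_dim=50, seed=s) for s in range(3)]
```

Running several seeds and selecting by a probing task matches the abstract's framing of producing specialized embeddings for a given end-task while requiring no additional labels.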