Decoding Decoders: Finding Optimal Representation Spaces for Unsupervised Similarity Tasks

12 Feb 2018 (modified: 14 Oct 2024) · ICLR 2018 Workshop Submission · Readers: Everyone
Keywords: distributed representations, sentence embedding, representation learning, unsupervised learning, encoder-decoder, RNN
TL;DR: By introducing the notion of an optimal representation space, we provide a theoretical argument and experimental validation that an unsupervised model for sentences can perform well on both unsupervised similarity and supervised transfer tasks.
Abstract: Experimental evidence indicates that simple models outperform complex deep networks on many unsupervised similarity tasks. We provide a simple yet rigorous explanation for this behaviour by introducing the concept of an optimal representation space, in which semantically close symbols are mapped to representations that are close under a similarity measure induced by the model’s objective function. In addition, we present a straightforward procedure that, without any retraining or architectural modifications, allows deep recurrent models to perform on par with (and sometimes better than) shallow models. To validate our analysis, we conduct a set of consistent empirical evaluations and introduce several new sentence embedding models in the process. Even though this work is presented within the context of natural language processing, the insights are readily applicable to other domains that rely on distributed representations for transfer tasks.
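To make the notion of an optimal representation space a little more concrete, the following is a minimal, hypothetical sketch (not taken from the paper or its repository): a toy encoder averages word vectors, and because a log-linear decoder scores outputs through dot products, the similarity measure induced by such an objective is the inner product (normalised here to cosine) between sentence representations. All names (`embed_sentence`, `induced_similarity`, the toy vocabulary) are illustrative.

```python
# Illustrative sketch only: a dot-product (log-linear) decoding objective induces
# a dot-product/cosine similarity on the representation space, so semantically
# close sentences should end up close under that measure.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["cats", "dogs", "sit", "on", "mats", "rugs"]
WORD_VECS = {w: rng.normal(size=8) for w in VOCAB}  # toy word embeddings

def embed_sentence(tokens):
    """Toy 'shallow' encoder: mean of word vectors."""
    return np.mean([WORD_VECS[t] for t in tokens], axis=0)

def induced_similarity(u, v):
    """Similarity induced by a dot-product objective, normalised to cosine."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

s1 = embed_sentence(["cats", "sit", "on", "mats"])
s2 = embed_sentence(["dogs", "sit", "on", "rugs"])
print("induced similarity:", induced_similarity(s1, s2))
```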
Code: [Babylonpartners/decoding-decoders](https://github.com/Babylonpartners/decoding-decoders)
Data: [MPQA Opinion Corpus](https://paperswithcode.com/dataset/mpqa-opinion-corpus), [SICK](https://paperswithcode.com/dataset/sick)
Community Implementations: [4 code implementations](https://www.catalyzex.com/paper/decoding-decoders-finding-optimal/code)