Decoding Decoders: Finding Optimal Representation Spaces for Unsupervised Similarity Tasks

15 Feb 2018 (modified: 03 Apr 2024) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Abstract: Experimental evidence indicates that simple models outperform complex deep networks on many unsupervised similarity tasks. Introducing the concept of an optimal representation space, we provide a simple theoretical resolution to this apparent paradox. In addition, we present a straightforward procedure that, without any retraining or architectural modifications, allows deep recurrent models to perform as well as (and sometimes better than) shallow models. To validate our analysis, we conduct a set of consistent empirical evaluations and introduce several new sentence embedding models in the process. Even though this work is presented within the context of natural language processing, the insights are readily applicable to other domains that rely on distributed representations for transfer tasks.
TL;DR: By introducing the notion of an optimal representation space, we provide a theoretical argument and experimental validation that an unsupervised model for sentences can perform well on both unsupervised similarity and supervised transfer tasks.
Keywords: distributed representations, sentence embedding, representation learning, unsupervised learning, encoder-decoder, RNN
Code: [Babylonpartners/decoding-decoders](https://github.com/Babylonpartners/decoding-decoders)
Data: [MPQA Opinion Corpus](https://paperswithcode.com/dataset/mpqa-opinion-corpus), [SICK](https://paperswithcode.com/dataset/sick)
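As a minimal, hedged illustration of how unsupervised similarity tasks of this kind are typically scored (not the paper's specific procedure): sentence embeddings are compared with cosine similarity and the resulting scores are correlated with human judgements. The `encode` function and the example sentences below are hypothetical placeholders standing in for the representation extracted from a trained encoder-decoder model.

```python
# Sketch of an unsupervised similarity evaluation over sentence embeddings.
# Assumes a hypothetical `encode` function mapping a sentence to a fixed-size
# vector; here it returns a deterministic random vector purely so the sketch runs.
import numpy as np


def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


def encode(sentence: str) -> np.ndarray:
    # Placeholder for a model's sentence representation.
    rng = np.random.default_rng(abs(hash(sentence)) % (2**32))
    return rng.standard_normal(300)


pairs = [
    ("A man is playing a guitar.", "Someone is playing an instrument."),
    ("A man is playing a guitar.", "A dog is running in a field."),
]
for s1, s2 in pairs:
    score = cosine_similarity(encode(s1), encode(s2))
    print(f"{score:.3f}  |  {s1}  /  {s2}")
```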