Forced Apart: Discovering Disentangled Representations Without Exhaustive Labels

15 Feb 2018 (modified: 10 Feb 2022) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Abstract: Learning better representations with neural networks is a challenging problem that has been tackled from different perspectives in recent years. In this work, we focus on learning representations that are useful for clustering. We introduce two novel loss components that substantially improve the quality of the produced clusters, are simple to apply to arbitrary models and cost functions, and do not require a complicated training procedure. We perform an extensive set of experiments, both supervised and unsupervised, and evaluate the proposed loss components on the two most common types of models, Recurrent Neural Networks and Convolutional Neural Networks, showing that our approach consistently improves the quality of K-Means clustering in terms of mutual information scores and outperforms previously proposed methods.
TL;DR: Novel loss components that force the network to learn a representation well-suited for clustering while it trains for a classification task.
Keywords: representation learning, clustering, loss
Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10)