Forced to Learn: Discovering Disentangled Representations Without Exhaustive Labels

ICLR 2017 (modified: 30 Mar 2017)
Abstract: Learning a better representation with neural networks is a challenging problem that has been tackled extensively from different perspectives in recent years. In this work, we focus on learning a representation that can be used for clustering and introduce a novel loss component that substantially improves the quality of the produced clusters, is simple to apply to an arbitrary cost function, and does not require a complicated training procedure.
TL;DR: A novel loss component that substantially improves KMeans clustering over the learned representations.
Conflicts: cs.uml.edu
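
The abstract does not spell out the proposed loss component, so the sketch below only illustrates the general pattern it describes: add an auxiliary loss term to an arbitrary base cost (a plain autoencoder reconstruction loss here), train, and then evaluate the learned representations with KMeans. The `Autoencoder` architecture, the `auxiliary_loss` (a simple decorrelation penalty), and the weight `lam` are hypothetical placeholders for illustration, not the method proposed in the paper.

```python
# Minimal sketch, assuming an autoencoder base cost and a placeholder
# auxiliary term; the paper's actual loss component is not shown here.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans


class Autoencoder(nn.Module):
    def __init__(self, in_dim=784, rep_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, rep_dim))
        self.decoder = nn.Sequential(nn.Linear(rep_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.encoder(x)          # learned representation
        return z, self.decoder(z)    # representation and reconstruction


def auxiliary_loss(z):
    # Hypothetical stand-in for the paper's loss component: penalize
    # off-diagonal covariance of the representation (a decorrelation term).
    z = z - z.mean(dim=0)
    cov = (z.T @ z) / max(z.shape[0] - 1, 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    return off_diag.pow(2).sum()


def train_step(model, optimizer, x, lam=0.1):
    # Arbitrary base cost (reconstruction) plus the weighted auxiliary term.
    optimizer.zero_grad()
    z, x_hat = model(x)
    loss = nn.functional.mse_loss(x_hat, x) + lam * auxiliary_loss(z)
    loss.backward()
    optimizer.step()
    return loss.item()


# Usage: train briefly on stand-in data, then cluster the representations.
model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.randn(256, 784)  # placeholder for real inputs
for _ in range(10):
    train_step(model, optimizer, data)

with torch.no_grad():
    reps, _ = model(data)
labels = KMeans(n_clusters=10, n_init=10).fit_predict(reps.numpy())
```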