Unsupervised Learning of Structured Representation via Closed-Loop Transcription

Published: 20 Nov 2023, Last Modified: 05 Dec 2023, CPAL 2024 (Proceedings Track) Oral
Keywords: Unsupervised/Self-supervised Learning, Closed-Loop Transcription
Abstract: This paper proposes an unsupervised method for learning a unified representation that serves both discriminative and generative purposes. While most existing unsupervised learning approaches focus on a representation for only one of these two goals, we show that a unified representation can enjoy the mutual benefits of having both. Such a representation is attainable by generalizing the recently proposed closed-loop transcription framework, known as CTRL, to the unsupervised setting. This entails solving a constrained maximin game over a rate reduction objective that expands features of all samples while compressing features of augmentations of each sample. Through this process, we see discriminative low-dimensional structures emerge in the resulting representations. Under comparable experimental conditions and network complexities, we demonstrate that these structured representations enable classification performance close to state-of-the-art unsupervised discriminative representations, and conditionally generated image quality significantly higher than that of state-of-the-art unsupervised generative models.
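The abstract's maximin objective is built on the rate reduction quantity from the closed-loop transcription literature: a coding-rate term that expands features of all samples, minus a weighted sum of coding rates that compresses the features within each group (here, augmentations of the same sample). A minimal sketch of these two quantities is below; the function names, the `eps` distortion parameter, and the group-index representation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Coding rate R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z Z^T)
    for a d x n feature matrix Z (columns are feature vectors)."""
    d, n = Z.shape
    # slogdet is numerically stabler than log(det(...)) for large matrices
    sign, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * (Z @ Z.T))
    return 0.5 * logdet

def rate_reduction(Z, groups, eps=0.5):
    """Delta R = R(Z) - sum_j (n_j / n) * R(Z_j): expand the features of
    all samples while compressing the features within each group.
    `groups` is a list of column-index lists partitioning the samples."""
    n = Z.shape[1]
    expand = coding_rate(Z, eps)
    compress = sum(len(idx) / n * coding_rate(Z[:, idx], eps) for idx in groups)
    return expand - compress
```

In the unsupervised setting described in the abstract, each group would consist of the features of augmentations of a single sample; by concavity of log-determinant, the rate reduction is nonnegative, and maximizing it pushes different groups apart while collapsing each group toward a low-dimensional structure.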
Track Confirmation: Yes, I am submitting to the proceedings track.
Submission Number: 6