On reducing the correlation of bottleneck representations in Autoencoders

Anonymous

04 Mar 2021 (modified: 05 May 2023) ICLR 2021 Workshop Neural Compression Blind Submission
Keywords: autoencoders, compression, diversity, correlation
TL;DR: We propose a scheme to avoid redundant features in the bottleneck representation of autoencoders.
Abstract: Image compression is an important image processing task. Recently, there has been growing interest in using autoencoders (AEs) to solve it. An AE has two goals: (i) compress the original input to a low-dimensional representation at the bottleneck of the network topology using the encoder, and (ii) reconstruct the input from that representation using the decoder. Both parts are optimized jointly by minimizing a distortion-based loss, which implicitly forces the model to keep only the variations in the input data needed for reconstruction, without preserving the redundancies. In this paper, we propose a scheme that explicitly penalizes feature redundancies in the bottleneck representation. To this end, we introduce an additional loss term, based on the pairwise correlations of the bottleneck neurons, which complements the standard reconstruction loss and forces the encoder to learn a more diverse and richer representation of the input. The proposed approach is evaluated on the MNIST dataset and yields superior experimental results.
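As an illustration of the idea (not the authors' exact formulation), a minimal PyTorch sketch of such a pairwise-correlation penalty could look as follows. The network architecture, the penalty weight lambda_corr, and the helper pairwise_correlation_penalty are all assumptions made for this example:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def pairwise_correlation_penalty(z, eps=1e-8):
        """Mean squared off-diagonal Pearson correlation between bottleneck units.

        z: (batch_size, bottleneck_dim) encoder activations. Hypothetical helper,
        illustrating one way to penalize redundant bottleneck features.
        """
        z = z - z.mean(dim=0, keepdim=True)          # center each unit over the batch
        z = z / (z.std(dim=0, keepdim=True) + eps)   # standardize each unit
        n, d = z.shape
        corr = (z.T @ z) / (n - 1)                   # (d, d) correlation matrix
        mask = 1.0 - torch.eye(d, device=z.device)   # zero out the diagonal
        return (corr * mask).pow(2).sum() / (d * (d - 1))

    class Autoencoder(nn.Module):
        """Hypothetical MLP autoencoder for flattened 28x28 MNIST images."""
        def __init__(self, bottleneck_dim=32):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                                         nn.Linear(256, bottleneck_dim))
            self.decoder = nn.Sequential(nn.Linear(bottleneck_dim, 256), nn.ReLU(),
                                         nn.Linear(256, 784), nn.Sigmoid())

        def forward(self, x):
            z = self.encoder(x)                      # bottleneck representation
            return self.decoder(z), z                # reconstruction and bottleneck

    model = Autoencoder()
    lambda_corr = 0.1                                # assumed penalty weight
    x = torch.rand(64, 784)                          # stand-in for a batch of MNIST images
    x_hat, z = model(x)
    # Total loss: standard reconstruction term plus the decorrelation penalty.
    loss = F.mse_loss(x_hat, x) + lambda_corr * pairwise_correlation_penalty(z)
    loss.backward()

Masking out the diagonal excludes each unit's self-correlation (always 1), so the penalty discourages only redundancy between distinct bottleneck units.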