Revisiting Associative Compression: I Can't Believe It's Not Better

ICML 2023 Workshop NCW Submission 11 Authors

Published: 11 Jul 2023, Last Modified: 11 Aug 2023
Keywords: neural compression, autoencoders, multisets, conditional generative model
TL;DR: we develop a method for improving the compression rate of neural compression models by leveraging the order in which images are compressed.
Abstract: Typically, the images in an unordered dataset are compressed individually and sequentially, in random order. Unfortunately, general set compression methods that improve over this default sequential treatment yield only small rate gains for high-dimensional objects such as images. We propose an approach for compressing image datasets by applying an image-to-image conditional generative model to a reordered dataset. Our approach is inspired by Associative Compression Networks (Graves et al., 2018). Although this variant of the variational auto-encoder was developed primarily for representation learning, its authors report substantial gains in the lossless compression of latent variables. We apply the core idea of that work, adapting the generative prior to a previously seen neighbor image, to a commonly used neural compression model: the mean-scale hyperprior model (MSHP) (Ballé et al., 2018; Minnen et al., 2018). The architectural changes we propose are also applicable to other methods such as ELIC (He et al., 2022). We train our model on subsets of an ordered version of ImageNet and report rate-distortion curves on the same dataset. Unfortunately, we only observe gains in latent space, and we speculate as to why the approach does not lead to more significant improvements.
Submission Number: 11
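To make the conditioning idea from the abstract concrete, below is a minimal, hypothetical sketch of a mean-scale hyperprior whose entropy parameters are additionally conditioned on a previously seen neighbor image. This is not the authors' implementation: the layer sizes, the separate neighbor encoder, the fusion by channel concatenation, and the straight-through quantization stand-in are all illustrative assumptions.

```python
# Sketch only: a conditional mean-scale hyperprior (MSHP-style) where the
# hyper-synthesis also sees features of a previously decoded neighbor image.
import torch
import torch.nn as nn

class ConditionalMeanScaleHyperprior(nn.Module):
    def __init__(self, n: int = 128, m: int = 192):
        super().__init__()
        # Analysis / synthesis transforms for the image being compressed (4x downsampling).
        self.g_a = nn.Sequential(
            nn.Conv2d(3, n, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(n, m, 5, stride=2, padding=2),
        )
        self.g_s = nn.Sequential(
            nn.ConvTranspose2d(m, n, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(n, 3, 5, stride=2, padding=2, output_padding=1),
        )
        # Hyper-analysis on the latent y (further 4x downsampling).
        self.h_a = nn.Sequential(
            nn.Conv2d(m, n, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(n, n, 3, stride=2, padding=1),
        )
        # Assumed neighbor encoder: maps the neighbor image to the hyper-latent resolution.
        self.neighbour_enc = nn.Sequential(
            nn.Conv2d(3, n, 5, stride=4, padding=2), nn.ReLU(),
            nn.Conv2d(n, n, 5, stride=4, padding=2),
        )
        # Hyper-synthesis predicts per-element mean and scale of y from [z_hat, neighbor context].
        self.h_s = nn.Sequential(
            nn.ConvTranspose2d(2 * n, n, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(n, 2 * m, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x, x_neighbour):
        y = self.g_a(x)                              # latent of the current image
        z = self.h_a(y)                              # hyper-latent
        z_hat = z + (torch.round(z) - z).detach()    # straight-through quantization stand-in
        ctx = self.neighbour_enc(x_neighbour)        # context from the neighbor image
        params = self.h_s(torch.cat([z_hat, ctx], dim=1))
        mean, scale = params.chunk(2, dim=1)         # Gaussian entropy-model parameters for y
        y_hat = y + (torch.round(y) - y).detach()
        x_hat = self.g_s(y_hat)
        return x_hat, y_hat, mean, nn.functional.softplus(scale)
```

Under these assumptions, training would minimize the usual rate-distortion objective, with the rate term computed from the Gaussian entropy model (mean, scale) for y and a factorized prior for z; only the conditioning path on the neighbor image differs from the standard MSHP.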