Abstract: Deep latent variable models have seen recent success in many data domains. Lossless compression is an application of these models which, despite having the potential to be highly useful, has yet to be implemented in a practical manner. We present 'Bits Back with ANS' (BB-ANS), a scheme to perform lossless compression with latent variable models at a near optimal rate. We demonstrate this scheme by using it to compress the MNIST dataset with a variational auto-encoder (VAE) model, achieving compression rates superior to standard methods with only a simple VAE. Given that the scheme is highly amenable to parallelization, we conclude that with a sufficiently high quality generative model it could be used to achieve substantial improvements in compression rate with acceptable running time. We make our implementation available open source at https://github.com/bits-back/bits-back.
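The core of the scheme is compact enough to sketch. Below is a toy, self-contained Python illustration of the bits-back-with-ANS idea, written for this page rather than taken from the authors' repository: the big-integer rANS coder, the two-valued latent `z`, and the hand-made frequency tables standing in for p(z), p(x|z) and q(z|x) are all illustrative assumptions. To push a symbol x, the coder first *decodes* a latent z from the existing message under q(z|x) (getting bits back), then encodes x under p(x|z) and z under the prior p(z); popping runs the three steps in reverse.

```python
# Toy BB-ANS sketch (illustrative assumptions throughout; not the authors' code).

PREC = 1 << 12  # every frequency table below sums to PREC

def rans_push(state, start, freq):
    """rANS encode of one symbol whose slot range is [start, start + freq)."""
    return (state // freq) * PREC + start + (state % freq)

def rans_pop(state, table):
    """rANS decode of one symbol from `table`, a list of (symbol, start, freq) rows."""
    slot = state % PREC
    for symbol, start, freq in table:
        if start <= slot < start + freq:
            return symbol, freq * (state // PREC) + slot - start
    raise ValueError("slot not covered by table")

def cdf_table(freqs):
    """Turn {symbol: freq} (freqs summing to PREC) into (symbol, start, freq) rows."""
    table, start = [], 0
    for symbol, freq in freqs.items():
        table.append((symbol, start, freq))
        start += freq
    return table

def lookup(table, symbol):
    """Return the (start, freq) pair for `symbol` in a cdf_table."""
    for s, start, freq in table:
        if s == symbol:
            return start, freq
    raise KeyError(symbol)

# Hand-made toy model: prior p(z), likelihood p(x|z), approximate posterior q(z|x).
p_z    = cdf_table({0: PREC // 2, 1: PREC // 2})
p_x_gz = {0: cdf_table({'a': 3 * PREC // 4, 'b': PREC // 4}),
          1: cdf_table({'a': PREC // 4, 'b': 3 * PREC // 4})}
q_z_gx = {'a': cdf_table({0: 3 * PREC // 4, 1: PREC // 4}),
          'b': cdf_table({0: PREC // 4, 1: 3 * PREC // 4})}

def bbans_push(state, x):
    z, state = rans_pop(state, q_z_gx[x])            # 1. get bits back: decode z from the message
    state = rans_push(state, *lookup(p_x_gz[z], x))  # 2. encode x under p(x|z)
    return rans_push(state, *lookup(p_z, z))         # 3. encode z under the prior p(z)

def bbans_pop(state):
    z, state = rans_pop(state, p_z)                    # 3'. decode z under the prior
    x, state = rans_pop(state, p_x_gz[z])              # 2'. decode x under p(x|z)
    return x, rans_push(state, *lookup(q_z_gx[x], z))  # 1'. return the borrowed bits

state = 1 << 32  # the initial state doubles as the "extra bits" consumed by the first pop
for x in "abab":
    state = bbans_push(state, x)
decoded = []
for _ in range(4):
    x, state = bbans_pop(state)
    decoded.append(x)
print("".join(reversed(decoded)))  # ANS is stack-like (LIFO), so reverse: prints "abab"
```

In the paper the distributions come from a trained VAE and the continuous latent is discretized, rather than being a two-symbol toy as here, but the encode/decode choreography is the same: decoding z under q(z|x) before encoding is what reclaims the "bits back" and brings the rate near the model's negative ELBO.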
Keywords: compression, variational auto-encoders, deep latent gaussian models, lossless compression, latent variables, approximate inference, variational inference
TL;DR: We do lossless compression of large image datasets using a VAE, beating existing compression algorithms.
Code: [bits-back/bits-back](https://github.com/bits-back/bits-back) + [1 community implementation](https://paperswithcode.com/paper/?openreview=ryE98iR5tm)
Data: [ImageNet](https://paperswithcode.com/dataset/imagenet), [MNIST](https://paperswithcode.com/dataset/mnist)
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/practical-lossless-compression-with-latent/code)