- Keywords: compression, variational inference, lossless compression, deep latent variable models
- TL;DR: We scale up lossless compression with latent variables, achieving state of the art on full-size ImageNet images.
- Abstract: We make the following striking observation: fully convolutional VAE models trained on 32x32 ImageNet can generalize well, not just to 64x64 images but also to far larger photographs, with no changes to the model. We exploit this property by applying fully convolutional models to lossless compression: we demonstrate a method that scales the VAE-based 'Bits-Back with ANS' algorithm to large color photographs, achieving state of the art for lossless compression of full-size ImageNet images. We release Craystack, an open source library for convenient prototyping of lossless compression using probabilistic models, along with full implementations of all of our compression results.
- Code: https://github.com/hilloc-submission/hilloc
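To illustrate the coding scheme the abstract refers to, here is a minimal sketch of bits-back coding on top of an rANS-style stack. This is not the authors' Craystack implementation; the two-symbol model, the distributions, and all function names below are made-up for illustration. The key idea is that encoding first *decodes* a latent z from the ANS state using the posterior q(z|x), then encodes x with p(x|z) and z with p(z); decoding runs the three steps in reverse, so the "borrowed" posterior bits are returned exactly.

```python
# Toy bits-back coding with an unbounded-precision rANS state (a Python int).
# Illustrative only: model, probabilities, and helper names are invented.

M = 1 << 16  # precision: all symbol counts sum to M

def enc(state, start, freq):
    # rANS push: encode a symbol with cumulative count `start` and count `freq`.
    return (state // freq) * M + start + (state % freq)

def dec(state, dist):
    # rANS pop: dist is a list of (symbol, start, freq) buckets covering [0, M).
    slot = state % M
    for s, start, freq in dist:
        if start <= slot < start + freq:
            return s, freq * (state // M) + slot - start
    raise ValueError("slot not covered")

def make_dist(probs):
    # Quantize probabilities to integer counts summing exactly to M.
    counts = [max(1, int(p * M)) for p in probs]
    counts[-1] += M - sum(counts)
    dist, acc = [], 0
    for sym, c in enumerate(counts):
        dist.append((sym, acc, c))
        acc += c
    return dist

def lookup(dist, s):
    # Find (start, freq) for symbol s.
    for sym, start, freq in dist:
        if sym == s:
            return start, freq

# A made-up model: one binary latent z, binary data x.
prior = make_dist([0.5, 0.5])        # p(z)
lik   = [make_dist([0.8, 0.2]),      # p(x | z=0)
         make_dist([0.3, 0.7])]      # p(x | z=1)
post  = [make_dist([0.6, 0.4]),      # q(z | x=0), illustrative numbers
         make_dist([0.25, 0.75])]    # q(z | x=1)

def bb_encode(state, x):
    z, state = dec(state, post[x])           # 1. decode z with q(z|x) ("bits back")
    state = enc(state, *lookup(lik[z], x))   # 2. encode x with p(x|z)
    state = enc(state, *lookup(prior, z))    # 3. encode z with p(z)
    return state

def bb_decode(state):
    z, state = dec(state, prior)             # undo step 3
    x, state = dec(state, lik[z])            # undo step 2
    state = enc(state, *lookup(post[x], z))  # undo step 1: return the borrowed bits
    return x, state

state0 = 1 << 64                 # some initial bits already on the stack
data = [0, 1, 1, 0, 1]
state = state0
for x in data:
    state = bb_encode(state, x)

decoded = []
for _ in data:
    x, state = bb_decode(state)
    decoded.append(x)
decoded.reverse()                # ANS is stack-like: last in, first out
assert decoded == data
assert state == state0           # all borrowed bits recovered exactly
```

Because each `bb_decode` is an exact inverse of the corresponding `bb_encode`, the final state returns to the initial one, which is what makes the net cost per symbol approach the negative ELBO rather than the full joint code length.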