Variational image compression with a scale hyperprior

Anonymous

Nov 03, 2017 (modified: Nov 03, 2017) ICLR 2018 Conference Blind Submission
  • Abstract: We describe an end-to-end trainable model for image compression based on variational autoencoders. The model incorporates a hyperprior into the generative model to effectively capture spatial dependencies in the latent representation. This hyperprior relates to side information, a concept universal to virtually all modern image codecs but largely unexplored for image compression with artificial neural networks (ANNs). Unlike existing autoencoder compression methods, our model trains a powerful entropy model jointly with the underlying autoencoder. We demonstrate that this model leads to state-of-the-art image compression when measuring visual quality using the popular MS-SSIM index, and yields rate--distortion performance surpassing published ANN-based methods when evaluated using a more traditional metric based on squared error (PSNR).
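To make the hyperprior idea concrete, the following is a minimal, hypothetical sketch (not the paper's implementation) of why side information helps the entropy model: quantized latents are coded under a zero-mean Gaussian whose per-element scale is predicted by the hyperprior, and a well-matched scale prediction yields a lower estimated bit rate than a fixed, mismatched scale. All names and values here are illustrative assumptions.

```python
import math
import random

def gaussian_cdf(x, scale):
    # CDF of a zero-mean Gaussian with standard deviation `scale`,
    # computed via the error function.
    return 0.5 * (1.0 + math.erf(x / (scale * math.sqrt(2.0))))

def rate_bits(latents, scales):
    """Estimated bits to code scalar-quantized latents under a
    zero-mean Gaussian entropy model with per-element scales
    (as a hyperprior would predict them)."""
    total = 0.0
    for y, sigma in zip(latents, scales):
        q = round(y)  # scalar quantization to the nearest integer
        # Probability mass of the quantization bin [q - 0.5, q + 0.5).
        p = gaussian_cdf(q + 0.5, sigma) - gaussian_cdf(q - 0.5, sigma)
        total += -math.log2(max(p, 1e-12))
    return total

random.seed(0)
latents = [random.gauss(0.0, 2.0) for _ in range(1000)]
# Scales matched to the data (what a good hyperprior would supply)
# versus a fixed, mismatched scale (a cruder factorized prior).
matched = rate_bits(latents, [2.0] * 1000)
mismatched = rate_bits(latents, [8.0] * 1000)
print(matched < mismatched)  # the matched model needs fewer bits
```

The gap between the two rates is the kind of redundancy the learned hyperprior exploits; in the actual model the scales are produced by a neural hyper-decoder and optimized jointly with the autoencoder.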
