Lossy Image Compression with Compressive Autoencoders

Published: 06 Feb 2017 (Last Modified: 22 Oct 2023), ICLR 2017 Poster
Abstract: We propose a new approach to the problem of optimizing autoencoders for lossy image compression. New media formats, changing hardware technology, as well as diverse requirements and content types create a need for compression algorithms which are more flexible than existing codecs. Autoencoders have the potential to address this need, but are difficult to optimize directly due to the inherent non-differentiability of the compression loss. We here show that minimal changes to the loss are sufficient to train deep autoencoders competitive with JPEG 2000 and outperforming recently proposed approaches based on RNNs. Our network is furthermore computationally efficient thanks to a sub-pixel architecture, which makes it suitable for high-resolution images. This is in contrast to previous work on autoencoders for compression using coarser approximations, shallower architectures, computationally expensive methods, or focusing on small images.
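The non-differentiability mentioned in the abstract stems from quantization: rounding the latent coefficients has zero gradient almost everywhere, so gradients cannot flow back to the encoder. The abstract does not spell out the "minimal changes to the loss", but one common way to bridge rounding during training is to quantize in the forward pass while substituting the identity for the gradient in the backward pass. The PyTorch sketch below is an illustrative assumption of such a workaround, not the authors' released code:

```python
import torch

class RoundStraightThrough(torch.autograd.Function):
    """Round in the forward pass, but treat the operation as the
    identity when back-propagating, so gradients can flow through
    the otherwise non-differentiable quantization step.
    (Illustrative sketch; not the paper's implementation.)"""

    @staticmethod
    def forward(ctx, x):
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output):
        # d round(x)/dx is zero almost everywhere; replace it with 1.
        return grad_output

def quantize(z):
    """Quantize latent coefficients with a pass-through gradient."""
    return RoundStraightThrough.apply(z)
```

With a substitution like this in place, the encoder, quantizer, and decoder can be trained end to end with ordinary stochastic gradient descent.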
TL;DR: A simple approach to train autoencoders to compress images as well as or better than JPEG 2000.
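The "sub-pixel architecture" credited in the abstract with the method's efficiency keeps most computation at low resolution and rearranges channels into pixels only when upsampling. As a hedged sketch (the layer sizes and block structure are illustrative assumptions, not the paper's configuration), a sub-pixel upsampling block in PyTorch can be written as:

```python
import torch
import torch.nn as nn

def subpixel_block(in_channels, out_channels, upscale=2):
    """Sub-pixel convolution: a convolution emits upscale**2 times the
    target channel count, and PixelShuffle rearranges those channels
    into a spatially upscaled feature map. Channel counts here are
    illustrative, not taken from the paper."""
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels * upscale ** 2,
                  kernel_size=3, padding=1),
        nn.PixelShuffle(upscale),
    )

# Example: upsample a 64-channel 32x32 feature map to 3-channel 64x64.
x = torch.randn(1, 64, 32, 32)
print(subpixel_block(64, 3)(x).shape)  # torch.Size([1, 3, 64, 64])
```

Because the expensive convolutions operate on the low-resolution grid, this style of decoder scales well to high-resolution images.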
Conflicts: twitter.com, bethgelab.org
Keywords: Computer vision, Deep learning, Applications
Community Implementations: [4 code implementations](https://www.catalyzex.com/paper/arxiv:1703.00395/code)