Lossy Image Compression with Compressive Autoencoders
Lucas Theis, Wenzhe Shi, Andrew Cunningham, Ferenc Huszár
Nov 04, 2016 (modified: Mar 01, 2017) · ICLR 2017 conference submission · readers: everyone
Abstract: We propose a new approach to the problem of optimizing autoencoders for lossy image compression. New media formats, changing hardware technology, and diverse requirements and content types create a need for compression algorithms which are more flexible than existing codecs. Autoencoders have the potential to address this need, but are difficult to optimize directly due to the inherent non-differentiability of the compression loss. We show that minimal changes to the loss are sufficient to train deep autoencoders that are competitive with JPEG 2000 and outperform recently proposed approaches based on RNNs. Our network is furthermore computationally efficient thanks to a sub-pixel architecture, which makes it suitable for high-resolution images. This is in contrast to previous work on autoencoders for compression, which used coarser approximations, shallower architectures, or computationally expensive methods, or focused on small images.
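The non-differentiability the abstract refers to comes from the quantization step: rounding latent coefficients to integers has zero derivative almost everywhere, so gradients cannot reach the encoder. A common remedy, and one way to read "minimal changes to the loss", is a straight-through-style substitution in the backward pass. The sketch below is a hypothetical illustration of that idea in plain NumPy (function names and the toy values are my own, not the authors' code):

```python
import numpy as np

def quantize_forward(z):
    """Forward pass: hard rounding of latent coefficients.

    This is what actually gets entropy-coded, but its derivative is
    zero almost everywhere, blocking gradient-based training.
    """
    return np.round(z)

def quantize_backward(grad_output):
    """Backward pass: straight-through substitution.

    Instead of the true (zero) derivative of rounding, pass the
    upstream gradient through unchanged, as if rounding were the
    identity map. The encoder then receives a useful training signal.
    """
    return grad_output

# Toy check: rounded codes go to the entropy coder, while the
# reconstruction gradient flows back to the encoder untouched.
z = np.array([0.4, 1.6, -2.3])
code = quantize_forward(z)                     # [ 0.,  2., -2.]
grad = quantize_backward(np.ones_like(z))      # [ 1.,  1.,  1.]
```

In a full training loop this substitution would sit between the encoder and decoder, so the rest of the rate-distortion objective can be optimized with standard backpropagation.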
TL;DR: A simple approach to train autoencoders to compress images as well as or better than JPEG 2000.
Keywords: Computer vision, Deep learning, Applications