COIN: COmpression with Implicit Neural representations

Published: 01 Apr 2021, Last Modified: 22 Oct 2023
Venue: Neural Compression Workshop @ ICLR 2021
Keywords: compression, implicit neural representations, function representations
TL;DR: We overfit MLPs to images and transmit their weights as compressed codes for the images.
Abstract: We propose a simple new approach to image compression: instead of storing the RGB values of each pixel of an image, we store the weights of a neural network overfitted to the image. Specifically, to encode an image, we fit it with an MLP that maps pixel locations to RGB values. We then quantize and store the weights of this MLP as a code for the image. To decode the image, we simply evaluate the MLP at every pixel location. We show that this simple approach outperforms JPEG at low bit-rates, even without entropy coding or learning a distribution over weights. While our framework is not yet competitive with state-of-the-art compression methods, it has various attractive properties which could make it a viable alternative to other neural data compression approaches.
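The abstract fully specifies the pipeline (fit, quantize, store, evaluate), so a short sketch can make it concrete. Below is a minimal PyTorch sketch of the encode/decode steps; the sine activation follows the SIREN-style MLPs the paper builds on, but the layer sizes, training schedule, coordinate normalization, and half-precision weight quantization here are illustrative assumptions rather than the paper's exact configuration (SIREN-specific weight initialization is also omitted for brevity).

```python
import torch
import torch.nn as nn

class Sine(nn.Module):
    """Sine activation, as in SIREN-style coordinate networks."""
    def __init__(self, w0=30.0):
        super().__init__()
        self.w0 = w0
    def forward(self, x):
        return torch.sin(self.w0 * x)

def make_mlp(in_dim=2, hidden=64, depth=4, out_dim=3):
    # MLP mapping a (x, y) pixel location to an (r, g, b) value.
    mods, d = [], in_dim
    for _ in range(depth):
        mods += [nn.Linear(d, hidden), Sine()]
        d = hidden
    mods.append(nn.Linear(d, out_dim))
    return nn.Sequential(*mods)

def pixel_coords(H, W):
    # Normalize pixel locations to [-1, 1] x [-1, 1].
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    return torch.stack([xs, ys], dim=-1).reshape(-1, 2)

def encode(image, steps=5000, lr=2e-4):
    # image: float tensor of shape (H, W, 3) with values in [0, 1].
    H, W, _ = image.shape
    coords, targets = pixel_coords(H, W), image.reshape(-1, 3)
    mlp = make_mlp()
    opt = torch.optim.Adam(mlp.parameters(), lr=lr)
    for _ in range(steps):  # overfit the MLP to this one image
        opt.zero_grad()
        loss = ((mlp(coords) - targets) ** 2).mean()
        loss.backward()
        opt.step()
    # Quantize: store weights in half precision (16 bits per weight);
    # these quantized weights *are* the code for the image.
    return {k: v.half() for k, v in mlp.state_dict().items()}

def decode(code, H, W):
    # Rebuild the MLP from the code and evaluate it at every pixel.
    mlp = make_mlp()
    mlp.load_state_dict({k: v.float() for k, v in code.items()})
    with torch.no_grad():
        rgb = mlp(pixel_coords(H, W)).clamp(0, 1)
    return rgb.reshape(H, W, 3)
```

Under this sketch's assumptions, the bit-rate is simply the number of MLP weights times 16 bits, consistent with the abstract's claim that no entropy coding is applied; trading off hidden width and depth against image size controls the rate.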
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2103.03123/code) (via CatalyzeX)