Autoencoding Implicit Neural Representations for Image Compression

Published: 11 Jul 2023, Last Modified: 16 Jul 2023
Venue: NCW ICML 2023
Keywords: Implicit Neural Representation, Neural Compression, Nonlinear Transform Coding, Image Compression
Abstract: Implicit Neural Representations (INRs) are increasingly popular methods for representing a variety of signals (Sitzmann et al., 2020b; Park et al., 2019; Mildenhall et al., 2021). Given their advantages over traditional signal representations, there are strong incentives to leverage them for signal compression. Here we focus on image compression, where recent INR-based approaches learn a base INR network shared across images, then infer and quantize a latent representation for each image in a second stage (Dupont et al., 2022; Schwarz & Teh, 2022; Schwarz et al., 2023). In this work, we view these approaches as special cases of nonlinear transform coding (NTC) and instead propose an end-to-end approach directly optimized for rate-distortion (R-D) performance. We essentially perform NTC with an INR-based decoder, achieving significantly faster training and improved R-D performance, though still falling short of state-of-the-art NTC approaches. By viewing the INR base network as a convolutional decoder composed entirely of 1×1 convolutions, we can attribute its inferior R-D performance to this inherent architectural constraint.
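To make the 1×1-convolution view concrete, here is a minimal PyTorch sketch (not from the paper; the class names INRDecoder and Conv1x1Decoder are hypothetical). It shows that a coordinate MLP applied pixel-wise computes exactly the same function as a stack of 1×1 convolutions over a coordinate grid, i.e. a decoder with no spatial mixing between pixels:

```python
import torch
import torch.nn as nn

# An INR base network: an MLP mapping (x, y) coordinates (optionally
# concatenated with a per-image latent) to RGB values, one pixel at a time.
class INRDecoder(nn.Module):
    def __init__(self, in_dim=2, hidden=64, out_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, coords):          # coords: (H*W, in_dim)
        return self.net(coords)

# The same computation as a convolutional decoder whose kernels are all
# 1x1: every pixel is processed independently of its neighbours.
class Conv1x1Decoder(nn.Module):
    def __init__(self, in_dim=2, hidden=64, out_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_dim, hidden, 1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 1), nn.ReLU(),
            nn.Conv2d(hidden, out_dim, 1),
        )

    def forward(self, coord_grid):      # coord_grid: (1, in_dim, H, W)
        return self.net(coord_grid)

# Copy the MLP weights into the 1x1 convolutions and verify that both
# decoders produce identical outputs on the same coordinate grid.
H, W = 8, 8
mlp, conv = INRDecoder(), Conv1x1Decoder()
with torch.no_grad():
    for lin, cnv in zip(
        [m for m in mlp.net if isinstance(m, nn.Linear)],
        [m for m in conv.net if isinstance(m, nn.Conv2d)],
    ):
        cnv.weight.copy_(lin.weight[..., None, None])  # (out, in) -> (out, in, 1, 1)
        cnv.bias.copy_(lin.bias)

ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
)
coords = torch.stack([xs, ys], dim=-1)                 # (H, W, 2)
out_mlp = mlp(coords.reshape(-1, 2)).reshape(H, W, 3)
out_conv = conv(coords.permute(2, 0, 1)[None]).squeeze(0).permute(1, 2, 0)
assert torch.allclose(out_mlp, out_conv, atol=1e-6)
```

Because every kernel is 1×1, the decoder's receptive field is a single pixel, which is the architectural constraint the abstract identifies as limiting R-D performance relative to NTC decoders with spatial kernels.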
Submission Number: 14