Lower Bounding Rate-Distortion From Samples

Published: 01 Apr 2021, Last Modified: 05 May 2023
Venue: Neural Compression Workshop @ ICLR 2021
Keywords: rate-distortion, lossy compression, stochastic optimization, information theory, neural networks, deep generative modeling
TL;DR: We propose to estimate a lower bound on the rate-distortion function of a general data source from its samples. UPDATE: see follow-up work at ICLR22, Towards Empirical Sandwich Bounds on the Rate-Distortion Function (https://arxiv.org/abs/2111.12166)
Abstract: The rate-distortion function $R(D)$ gives the minimum number of bits needed, on average, to compress a random object within a given distortion tolerance $D$. A lower bound on the rate-distortion function therefore represents a fundamental limit on the best possible rate-distortion performance of any lossy compression algorithm, and can help us assess the potential room for improvement. We make a first attempt at an algorithm for estimating such a lower bound from data samples, applicable to general memoryless data sources. Based on a dual characterization of $R(D)$ (Csiszár, 1974), our method solves a constrained maximization problem over a family of functions parameterized by neural networks. On a 2D Gaussian source, we obtain a lower bound within 1 bit of the analytical rate-distortion function. Our code can be found at https://github.com/mandt-lab/empirical-RD-sandwich.
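
To make the abstract's method statement concrete: the dual characterization it cites (Csiszár, 1974) is commonly stated as $R(D) \geq \mathbb{E}[\log u(X)] - \lambda D$ for any $\lambda \geq 0$ and any $u \geq 0$ satisfying $\sup_y \mathbb{E}[u(X) e^{-\lambda d(X,y)}] \leq 1$; one standard way to handle the constraint is to normalize, giving the unconstrained objective $-\lambda D + \mathbb{E}[g(X)] - \log \sup_y \mathbb{E}[e^{g(X) - \lambda d(X,y)}]$ with $g = \log u$. The sketch below is a hypothetical, minimal PyTorch rendition of this idea for the 2D Gaussian source mentioned in the abstract; it is not the authors' implementation (see the linked repository), and the grid approximation of the supremum over $y$, the fixed dual slope $\lambda$, and the network architecture are all illustrative assumptions.

```python
# Hypothetical sketch (not the authors' code): estimate a lower bound on R(D)
# for a 2D standard Gaussian source under squared-error distortion, using the
# unconstrained dual objective
#   R(D) >= -lam*D + E[g(X)] - log sup_y E[exp(g(X) - lam*||X - y||^2)].
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
dim = 2              # source dimension
D = 0.25             # distortion level (sum of squared errors over both dims)
lam = dim / (2 * D)  # dual slope; illustrative choice, normally swept or tuned

# g(x) = log u(x), parameterized by a small neural network (assumed architecture).
net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Coarse grid of candidate reproduction points y, a crude stand-in for the
# supremum over y (the released code handles this step more carefully).
axis = torch.linspace(-3.0, 3.0, 25)
ys = torch.cartesian_prod(axis, axis)  # (625, 2)

for step in range(2000):
    x = torch.randn(2048, dim)                            # samples from N(0, I)
    g = net(x).squeeze(-1)                                # g(x), shape (2048,)
    d = ((x[:, None, :] - ys[None, :, :]) ** 2).sum(-1)   # ||x - y||^2, (2048, 625)
    # log E_X[exp(g(X) - lam * d(X, y))] per grid point y, then max over y.
    log_means = torch.logsumexp(g[:, None] - lam * d, dim=0) - math.log(x.shape[0])
    bound = -lam * D + g.mean() - log_means.max()         # lower bound in nats
    opt.zero_grad()
    (-bound).backward()
    opt.step()

print(f"estimated R(D) lower bound: {bound.item() / math.log(2):.3f} bits")
```

With the supremum over $y$ replaced by a finite maximum over samples or a grid, the resulting quantity remains a valid lower bound only up to that approximation; a tighter treatment of the inner maximization is one of the points developed in the follow-up work linked in the TL;DR.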