Robust Distributed Compression with Learned Heegard–Berger Scheme

Published: 15 Apr 2024, Last Modified: 06 May 2024 · Learn to Compress @ ISIT 2024 Poster · CC BY 4.0
Keywords: Distributed source coding, Wyner–Ziv coding, Heegard–Berger coding, lossy compression, binning, learning, neural networks, rate–distortion theory
TL;DR: We propose robust learning-based lossy compression schemes addressing the Heegard–Berger problem, where decoder-only side information may be unavailable. Our proposed schemes achieve rate–distortion performance close to information-theoretic bounds.
Abstract: We consider lossy compression of an information source when decoder-only side information may be absent. This setup, also referred to as the Heegard–Berger or Kaspi problem, is a special case of robust distributed source coding. Building upon previous works on neural network-based distributed compressors developed for the decoder-only side information (Wyner–Ziv) case, we propose learning-based schemes that adapt to the availability of side information. We find that our learned compressors mimic the achievability part of the Heegard–Berger theorem and yield interpretable results that operate close to information-theoretic bounds. Depending on the availability of the side information, our neural compressors recover characteristics of the point-to-point (i.e., with no side information) and the Wyner–Ziv coding strategies, including binning in the source space, even though no structure exploiting knowledge of the source and side information was imposed on the design.
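
For reference, the achievability result the abstract alludes to can be stated compactly. In standard notation, with source X, side information Y available only at the second decoder, and distortion targets D_1 (side information absent) and D_2 (side information present), the Heegard–Berger rate–distortion function takes the form:

```latex
R_{\mathrm{HB}}(D_1, D_2) = \min \left[ I(X; W_1) + I(X; W_2 \mid W_1, Y) \right],
```

where the minimum is over auxiliary random variables (W_1, W_2) satisfying the Markov chain (W_1, W_2) – X – Y, with reconstructions \hat{X}_1 = f_1(W_1) and \hat{X}_2 = f_2(W_1, W_2, Y) meeting the respective distortion constraints.

The following is a minimal, hypothetical sketch of the kind of learned scheme the abstract describes: a single encoder with a discretized (binned) latent and two decoders, one operating without the side information and one with it. All module names, dimensions, and the entropy-based rate proxy are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch of a Heegard–Berger-style learned compressor
# (illustrative only; not the authors' code or architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class HBCompressor(nn.Module):
    def __init__(self, dim=1, num_bins=16, hidden=64):
        super().__init__()
        # Encoder maps the source to logits over discrete "bins".
        self.encoder = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, num_bins))
        # Decoder 1: side information absent (point-to-point branch).
        self.dec_no_si = nn.Sequential(
            nn.Linear(num_bins, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        # Decoder 2: side information available (Wyner–Ziv branch).
        self.dec_si = nn.Sequential(
            nn.Linear(num_bins + dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, x, y, tau=1.0):
        logits = self.encoder(x)
        # Differentiable discrete index via the Gumbel-softmax relaxation.
        q = F.gumbel_softmax(logits, tau=tau)
        return self.dec_no_si(q), self.dec_si(torch.cat([q, y], dim=-1)), logits

# Toy training step on a correlated Gaussian pair (Y = X + noise).
model = HBCompressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 1)
y = x + 0.3 * torch.randn_like(x)
xh1, xh2, logits = model(x, y)
# Crude rate proxy: entropy of the average bin-usage distribution.
p = F.softmax(logits, dim=-1).mean(dim=0)
rate = -(p * (p + 1e-9).log()).sum()
loss = F.mse_loss(xh1, x) + F.mse_loss(xh2, x) + 0.1 * rate
opt.zero_grad()
loss.backward()
opt.step()
```

Under such a setup, one would expect the trained encoder to reuse a bin index for well-separated source values whenever the side information suffices to disambiguate them, which is the binning behavior the abstract reports emerging without being imposed.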
Submission Number: 10