Lossy Compression for Lossless Prediction

Published: 01 Apr 2021, Last Modified: 22 Oct 2023 · Neural Compression Workshop @ ICLR 2021
Keywords: Compression, Invariances, Information Theory, Machine Learning, Self-Supervised Learning
TL;DR: We formalize and experimentally evaluate the notion of compression that guarantees high downstream predictive performance under invariances.
Abstract: Most data is automatically collected and only ever "seen" by algorithms. Yet, data compressors preserve perceptual fidelity rather than just the information needed by algorithms performing downstream tasks. In this paper, we characterize the minimum bit-rate required to ensure high performance on all predictive tasks that are invariant under a set of transformations, such as data augmentations. Based on our theory, we design unsupervised objectives for training neural compressors. Using these objectives, we achieve rate savings of around 60% on standard datasets, like MNIST, without decreasing classification performance.
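The abstract only sketches the idea of an invariance-aware unsupervised compression objective; the exact objectives are given in the full paper. As a rough, hypothetical illustration (not the authors' method), one could pair an augmentation-invariance term with a rate penalty, so that two augmented views of the same image receive nearby codes while the code itself stays cheap. All module names, the rate proxy, and the trade-off weight below are placeholders.

```python
# Hypothetical sketch of an invariance-plus-rate objective (not the paper's exact objective).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Small MLP encoder mapping 28x28 images to a code vector."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, dim)
        )

    def forward(self, x):
        return self.net(x)

def info_nce(z1, z2, temperature=0.1):
    # Invariance term: codes of two augmentations of the same image should match,
    # while codes of different images are pushed apart.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

def rate_proxy(z):
    # Crude rate proxy: penalize code energy under a unit Gaussian prior.
    return 0.5 * (z ** 2).mean()

encoder = Encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# x_aug1, x_aug2 stand in for two augmentations of the same MNIST batch ([B, 1, 28, 28]).
x_aug1, x_aug2 = torch.rand(32, 1, 28, 28), torch.rand(32, 1, 28, 28)
z1, z2 = encoder(x_aug1), encoder(x_aug2)

opt.zero_grad()
loss = info_nce(z1, z2) + 1e-2 * rate_proxy(z1)  # weight trades off rate vs. invariance
loss.backward()
opt.step()
```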
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2106.10800/code)