COIN++: Neural Compression Across Modalities

Published: 07 Dec 2022, Last Modified: 28 Feb 2023. Accepted by TMLR.
Abstract: Neural compression algorithms are typically based on autoencoders that require specialized encoder and decoder architectures for different data modalities. In this paper, we propose COIN++, a neural compression framework that seamlessly handles a wide range of data modalities. Our approach is based on converting data to implicit neural representations, i.e. neural functions that map coordinates (such as pixel locations) to features (such as RGB values). Then, instead of storing the weights of the implicit neural representation directly, we store modulations applied to a meta-learned base network as a compressed code for the data. We further quantize and entropy code these modulations, leading to large compression gains while reducing encoding time by two orders of magnitude compared to baselines. We empirically demonstrate the feasibility of our method by compressing various data modalities, from images and audio to medical and climate data.
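To make the pipeline in the abstract concrete, below is a minimal PyTorch sketch of an implicit neural representation with per-datum shift modulations fitted against a frozen, meta-learned base network. All names, layer sizes, and hyperparameters here (ModulatedSirenLayer, w0, steps, lr) are illustrative assumptions rather than the paper's exact configuration, and the quantization and entropy coding of the modulations are omitted; see the linked repository for the actual implementation.

```python
import torch
import torch.nn as nn

class ModulatedSirenLayer(nn.Module):
    """SIREN-style layer whose pre-activation is shifted by a per-datum modulation."""
    def __init__(self, dim_in, dim_out, w0=30.0):
        super().__init__()
        self.linear = nn.Linear(dim_in, dim_out)
        self.w0 = w0

    def forward(self, x, shift):
        # shift: modulation vector of size dim_out, specific to one datum
        return torch.sin(self.w0 * (self.linear(x) + shift))

class ModulatedINR(nn.Module):
    """Base network shared across data (meta-learned); only modulations are stored per datum."""
    def __init__(self, dim_in=2, dim_hidden=64, dim_out=3, num_layers=3):
        super().__init__()
        dims = [dim_in] + [dim_hidden] * num_layers
        self.layers = nn.ModuleList(
            ModulatedSirenLayer(d_in, d_out) for d_in, d_out in zip(dims[:-1], dims[1:])
        )
        self.out = nn.Linear(dim_hidden, dim_out)

    def forward(self, coords, mods):
        h = coords
        for layer, shift in zip(self.layers, mods):
            h = layer(h, shift)
        return self.out(h)

def encode(inr, coords, features, steps=3, lr=1e-2):
    """Encode one datum: fit only the modulations with a few gradient steps,
    starting from zeros, while the meta-learned base network stays frozen."""
    mods = [torch.zeros(layer.linear.out_features, requires_grad=True)
            for layer in inr.layers]
    opt = torch.optim.SGD(mods, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((inr(coords, mods) - features) ** 2).mean()
        loss.backward()
        opt.step()
    return [m.detach() for m in mods]

# Illustrative usage for an image:
# inr = ModulatedINR()              # base net, meta-learned across a dataset
# coords, rgb = ...                 # (num_pixels, 2) coordinates, (num_pixels, 3) RGB values
# mods = encode(inr, coords, rgb)   # the modulations are the compressed code
```

Because only the modulations are optimized at encoding time, compressing a new datum reduces to a handful of gradient steps on a small vector, which is where the claimed reduction in encoding time comes from.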
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Updates made since the last revision:
- We have updated the text to better clarify the claims about the ease of applicability of INR-based compression compared to other neural compression methods (abstract, Section 1, Section 4.2, Section 4.3, Section 5)
- We have included a discussion of codecs on manifolds to better situate the climate experiments in the literature (Section 4.3)
- We have updated Figure 1 and its caption to improve clarity
- We have updated Figure 2 and its caption to improve clarity
- We have added a link to the code both in the paper and on OpenReview
Code: https://github.com/EmilienDupont/coinpp
Assigned Action Editor: ~Yingzhen_Li1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 485