Meta-Learning Sparse Compression Networks

Published: 07 Sept 2022 · Last Modified: 28 Feb 2023 · Accepted by TMLR
Abstract: Recent work in Deep Learning has re-imagined the representation of data as functions mapping from a coordinate space to an underlying continuous signal. When such functions are approximated by neural networks, this introduces a compelling alternative to the more common multi-dimensional array representation. Recent work on such Implicit Neural Representations (INRs) has shown that, following careful architecture search, INRs can outperform established compression methods such as JPEG (e.g., Dupont et al., 2021). In this paper, we propose crucial steps towards making such ideas scalable: Firstly, we employ state-of-the-art network sparsification techniques to drastically improve compression. Secondly, we introduce the first method allowing sparsification to be employed in the inner loop of commonly used meta-learning algorithms, drastically improving both compression and the computational cost of learning INRs. The generality of this formalism allows us to present results on diverse data modalities such as images, manifolds, signed distance functions, 3D shapes, and scenes, several of which establish new state-of-the-art results.
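To make the idea in the abstract concrete, below is a minimal sketch of an INR with inner-loop sparsification: a small coordinate MLP is fitted to a single signal, and only the largest-magnitude fraction of each adaptation update is kept. This is an illustrative assumption, not the paper's actual method; the function names, the sine activations, and the magnitude-based top-k masking rule are all hypothetical choices made for the sketch.

```python
import jax
import jax.numpy as jnp

# Hypothetical sketch: a coordinate MLP ("INR") mapping (x, y) -> RGB,
# adapted per-signal in a MAML-style inner loop where only a sparse subset
# of each weight update (selected by magnitude) is retained.

def init_mlp(key, sizes=(2, 64, 64, 3)):
    params = []
    for i in range(len(sizes) - 1):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (sizes[i], sizes[i + 1])) / jnp.sqrt(sizes[i])
        params.append((w, jnp.zeros(sizes[i + 1])))
    return params

def mlp(params, coords):
    h = coords
    for w, b in params[:-1]:
        h = jnp.sin(h @ w + b)  # sine activations, SIREN-style (an assumption)
    w, b = params[-1]
    return h @ w + b

def loss(params, coords, targets):
    return jnp.mean((mlp(params, coords) - targets) ** 2)

def sparse_inner_step(params, coords, targets, lr=1e-2, keep=0.1):
    """One adaptation step; keep only the largest `keep` fraction of weight deltas."""
    grads = jax.grad(loss)(params, coords, targets)
    new_params = []
    for (w, b), (gw, gb) in zip(params, grads):
        dw = -lr * gw
        # magnitude-based mask: zero out all but the top `keep` fraction of entries
        k = max(1, int(keep * dw.size))
        thresh = jnp.sort(jnp.abs(dw).ravel())[-k]
        dw = jnp.where(jnp.abs(dw) >= thresh, dw, 0.0)
        new_params.append((w + dw, b - lr * gb))  # biases are left dense here
    return new_params

# Usage: adapt the shared initialisation to one "task" (signal) with a few
# sparse inner-loop steps; the random data stands in for real pixels.
key = jax.random.PRNGKey(0)
params = init_mlp(key)
coords = jax.random.uniform(key, (256, 2))   # pixel coordinates
targets = jax.random.uniform(key, (256, 3))  # RGB values
for _ in range(3):
    params = sparse_inner_step(params, coords, targets)
```

Since only the sparse deltas differ per signal, a codec along these lines would need to store just the masked updates (plus mask indices) on top of a shared meta-learned initialisation, which is where the compression gain would come from.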
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: This is the first submission of this work to TMLR (or any other venue); we use this field to record changes made during the review process.
Revision 1:
- Added the requested comparison to modern compression schemes with results on CIFAR-10 and Kodak, including a direct comparison to COIN++ as suggested by the reviewers.
- Addressed various typos and minor errors pointed out by reviewers.
Revision 2:
- Added the requested comparison between using MSCN to obtain a fully dense final network and sparsifying only the task-specific changes.
- Made Greek letters bold for consistency of vector notation.
- Fixed an incorrect colour in Figure 5 (middle).
Revision 3:
- Added a model diagram to the introduction to provide intuitive understanding.
- Added a full explanation of compression performance and a hyperparameter table (Appendix).
- Added two plots showing compression performance as a function of the number of bits, and a plot showing the effect of entropy coding.
- Added qualitative results for CIFAR-10.
- Added an EMA ablation in the Appendix.
Revision 4:
- Added comments about runtime and computational requirements in the Appendix.
Revision 5:
- Added qualitative results on Kodak.
Assigned Action Editor: ~Stephan_M_Mandt1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 103