Partition Matters in Learning and Learning-to-Learn Implicit Neural Representations

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: implicit neural representations, partition, meta-learning
Abstract: $\textit{Implicit neural representation}$ (INR) aims to learn a $\textit{continuous function}$ (i.e., a neural network) to represent an image, where the input and output of the function are pixel coordinates and RGB/gray values, respectively. However, images tend to consist of many objects whose colors are not perfectly consistent, so the image itself is actually a $\textit{discontinuous piecewise function}$ that cannot be well approximated by a continuous function. In this paper, we empirically observe that when a neural network is forced to fit a discontinuous piecewise function (e.g., a step function) to a fixed small error $\epsilon$, the time cost increases exponentially. We call this phenomenon the $\textit{exponential-increase}$ hypothesis. If this hypothesis holds, representing an image with many objects by a single INR is almost impossible. To address this issue, we first prove that partitioning a complex signal into several sub-regions and utilizing piecewise INRs to fit that signal can significantly reduce the convergence time, even when the exponential-increase hypothesis is true. Based on this fact, we introduce two partition-based INR methods: one for learning INRs, and the other for learning-to-learn INRs. Both methods partition an image into different sub-regions and dedicate a smaller network to each sub-region. In addition, we propose two partition rules, based on regular grids and semantic segmentation maps, respectively. Extensive experiments validate the effectiveness of the proposed partitioning methods for learning an INR of a single image (ordinary learning framework) and for the learning-to-learn framework.
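The regular-grid partition rule from the abstract can be sketched as follows. This is a minimal illustration (not the authors' code): `grid_partition` is a hypothetical helper that assigns each pixel coordinate of an image to one of $g \times g$ grid cells, where each cell would then be fit by its own small INR.

```python
import numpy as np

def grid_partition(h, w, g):
    """Assign each pixel (y, x) of an h x w image to one of g*g regular
    grid cells. In a partition-based INR, each cell's pixels would be fit
    by a dedicated small network instead of one large network for the
    whole (piecewise-discontinuous) image."""
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Integer cell indices along each axis, clipped to stay in [0, g).
    cell_y = np.minimum(ys * g // h, g - 1)
    cell_x = np.minimum(xs * g // w, g - 1)
    return cell_y * g + cell_x  # (h, w) array of cell ids in [0, g*g)

# Example: a 4x4 image split into a 2x2 grid -> four 2x2-pixel cells.
labels = grid_partition(4, 4, 2)
```

Each cell id selects which sub-network receives a given coordinate; the semantic-segmentation rule would replace the regular grid with segment labels from a segmentation map.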
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
TL;DR: We use partition techniques to speed up the convergence of learning INRs and learning-to-learn INRs.