Implicit Neural Representations with Levels-of-Experts

Published: 31 Oct 2022, 18:00 · Last Modified: 12 Oct 2022, 04:21 · NeurIPS 2022 Accept · Readers: Everyone
Keywords: Implicit neural representations, neural fields, coordinate-based networks, hybrid representations, positional encoding
TL;DR: Coordinate-based MLP with periodic and multi-scale position-dependent weights arranged in multi-resolution grids.
Abstract: Coordinate-based networks, usually in the form of MLPs, have been successfully applied to the task of predicting high-frequency but low-dimensional signals from coordinate inputs. To scale them to model large-scale signals, previous works resort to hybrid representations, combining a coordinate-based network with a grid-based representation, such as sparse voxels. However, such approaches lack a compact global latent representation in their grids, making it difficult to model a distribution of signals, which is important for generalization tasks. To address this limitation, we propose the Levels-of-Experts (LoE) framework, a novel coordinate-based representation consisting of an MLP with periodic, position-dependent weights arranged hierarchically. For each linear layer of the MLP, multiple candidate values of its weight matrix are tiled and replicated across the input space, with different layers replicating at different frequencies. Based on the input, only one of the weight matrices is chosen for each layer. This greatly increases the model capacity without incurring extra computation or compromising generalization capability. We show that the new representation is an efficient and competitive drop-in replacement for a wide range of tasks, including signal fitting, novel view synthesis, and generative modeling.
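The core mechanism described in the abstract (per-layer banks of weight matrices tiled periodically over the input domain, with coarser tiling in some layers and finer tiling in others) can be sketched as follows. This is a minimal illustrative reading of the abstract, not the authors' implementation: the class name, the cyclic cell-to-expert mapping, and the specific tiling frequencies are all assumptions made for the example.

```python
import numpy as np

class LoELayer:
    """Sketch of a linear layer whose weight matrix is selected per input
    coordinate from a bank of experts tiled periodically over the domain.
    (Hypothetical reading of the abstract, not the authors' code.)"""

    def __init__(self, in_dim, out_dim, num_experts, freq, rng):
        # One candidate weight matrix (and bias) per expert.
        self.weights = rng.standard_normal((num_experts, out_dim, in_dim))
        self.biases = rng.standard_normal((num_experts, out_dim))
        self.num_experts = num_experts
        self.freq = freq  # tiling frequency along each coordinate axis

    def expert_index(self, coord):
        # Partition the input space into cells of size 1/freq; map cells
        # cyclically onto the expert bank (an assumed tiling rule).
        cells = np.floor(coord * self.freq).astype(int)
        return int(cells.sum()) % self.num_experts

    def __call__(self, coord, h):
        i = self.expert_index(coord)  # hard, position-based selection
        # ReLU activation; a real model would skip it on the output layer.
        return np.maximum(self.weights[i] @ h + self.biases[i], 0.0)

# Toy 3-layer coordinate MLP on 2-D inputs: earlier layers tile at a lower
# frequency (coarse experts), later layers at a higher frequency (fine).
rng = np.random.default_rng(0)
layers = [
    LoELayer(2, 16, num_experts=4, freq=2, rng=rng),
    LoELayer(16, 16, num_experts=4, freq=4, rng=rng),
    LoELayer(16, 1, num_experts=4, freq=8, rng=rng),
]

def loe_mlp(coord):
    h = coord
    for layer in layers:
        h = layer(coord, h)  # every layer re-reads the raw coordinate
    return h

out = loe_mlp(np.array([0.3, 0.7]))  # scalar signal value, shape (1,)
```

Note that evaluating any single coordinate still runs exactly one matrix multiply per layer, which matches the abstract's claim that capacity grows without extra computation: the expert bank enlarges the parameter count, not the per-query FLOPs.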
Supplementary Material: zip