Learning Signal-Agnostic Manifolds of Neural Fields

Published: 09 Nov 2021, Last Modified: 08 Sept 2024, NeurIPS 2021 Poster
Keywords: Neural Fields, Coordinate-based, Implicit Function, Manifold, Multimodal, Generative Model, Cross-Modal
TL;DR: We learn a modality-independent method to capture the underlying manifold structure of a set of data points.
Abstract: Deep neural networks have been widely used to learn the latent structure of datasets across modalities such as images, shapes, and audio signals. However, existing models are generally modality-dependent, requiring custom architectures and objectives to process different classes of signals. We leverage neural fields to capture the underlying structure of image, shape, audio, and cross-modal audiovisual domains in a modality-independent manner. We cast our task as one of learning a manifold, where we aim to infer a low-dimensional, locally linear subspace in which our data resides. By enforcing coverage of the manifold, local linearity, and local isometry, our model -- dubbed GEM -- learns to capture the underlying structure of datasets across modalities. We can then travel along linear regions of our manifold to obtain perceptually consistent interpolations between samples, and we can further use GEM to recover points on our manifold, gleaning not only diverse completions of input images but also cross-modal hallucinations of audio or image signals. Finally, we show that by walking across the underlying manifold of GEM, we can generate new samples in our signal domains.
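The abstract names three manifold constraints (coverage, local linearity, and local isometry) without implementation detail. The sketch below is one way to read those constraints in code, as an assumption rather than the authors' method: the autodecoder-style latent table, the coordinate-based MLP field, the nearest-neighbour proxy for local linearity, the distance-matching proxy for local isometry, and all names, dimensions, and loss weights are hypothetical.

```python
# Illustrative sketch only (assumed, not the paper's code): an autodecoder-style
# latent table plus a coordinate-based MLP field, trained with three losses that
# mirror the constraints named in the abstract -- coverage (reconstruction),
# local linearity, and local isometry.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NeuralField(nn.Module):
    """Maps (coordinates, latent code) -> signal values; only coord_dim/out_dim
    change across modalities, keeping the architecture signal-agnostic."""

    def __init__(self, coord_dim=2, latent_dim=64, out_dim=3, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(coord_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, coords, z):
        # coords: (B, P, coord_dim), z: (B, latent_dim)
        z = z.unsqueeze(1).expand(-1, coords.shape[1], -1)
        return self.net(torch.cat([coords, z], dim=-1))


def manifold_losses(field, z, coords, signals, k=5):
    """Coverage + local linearity + local isometry over a batch of signals,
    each paired with its own latent code z[i]. Loss weights are placeholders."""
    # Coverage: every training signal must be reconstructable from its code.
    loss_cov = F.mse_loss(field(coords, z), signals)

    # Pairwise distances in latent space and in signal space.
    d_z = torch.cdist(z, z)                                    # (B, B)
    d_x = torch.cdist(signals.flatten(1), signals.flatten(1))  # (B, B)

    # Local linearity (proxy): each code should lie close to the span of its
    # k nearest neighbours, here crudely approximated by their mean.
    knn = d_z.topk(k + 1, largest=False).indices[:, 1:]        # drop self
    loss_lin = F.mse_loss(z, z[knn].mean(dim=1))

    # Local isometry (proxy): latent distances should stay proportional to
    # signal-space distances, up to a single global scale.
    scale = (d_x.mean() / (d_z.mean() + 1e-8)).detach()
    loss_iso = F.mse_loss(scale * d_z, d_x)

    return loss_cov + 0.1 * loss_lin + 0.1 * loss_iso


# Toy usage: 32 "images" sampled at 128 random pixel coordinates each.
field = NeuralField()
z = nn.Parameter(0.01 * torch.randn(32, 64))
coords = torch.rand(32, 128, 2)
signals = torch.rand(32, 128, 3)
manifold_losses(field, z, coords, signals).backward()
```

For the actual formulation of these constraints and the full training objective, see the code release linked below.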
Supplementary Material: pdf
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Code: https://yilundu.github.io/gem
Community Implementations: [5 code implementations](https://www.catalyzex.com/paper/learning-signal-agnostic-manifolds-of-neural/code)