Keywords: weight space learning, representation learning, metanetworks, graph metanetworks, neural fields, neural radiance fields, NeRF, implicit neural representations, INR
TL;DR: We present the first framework that performs tasks on NeRFs by processing their weights and that can operate on diverse architectures.
Abstract: Neural Radiance Fields (NeRFs) have emerged as a groundbreaking paradigm for representing 3D objects and scenes by encoding shape and appearance information into the weights of a neural network. Recent studies have demonstrated that these weights can serve as input to frameworks designed to address deep learning tasks; however, such frameworks require NeRFs to adhere to a specific, predefined architecture. In this paper, we introduce the first framework capable of processing NeRFs with diverse architectures and performing inference on architectures unseen at training time. We achieve this by training a Graph Meta-Network within an unsupervised representation learning framework, and show that a contrastive objective is conducive to obtaining an architecture-agnostic latent space. In experiments conducted across 13 NeRF architectures belonging to three families (MLPs, tri-planes, and, for the first time, hash tables), our approach demonstrates robust performance in classification, retrieval, and language tasks involving multiple architectures, including ones unseen at training time, while also matching or exceeding the results of existing frameworks limited to single architectures. Our code and data are available at [this https URL](https://cvlab-unibo.github.io/gmnerf).
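The abstract mentions training with a contrastive objective so that embeddings of the same scene, encoded by different NeRF architectures, land close together in a shared latent space. As a rough illustration only (not the paper's actual loss or code), the sketch below implements a generic InfoNCE-style objective over two batches of embeddings, where `anchors[i]` and `positives[i]` are assumed to come from two different architectures representing the same object; the function name and temperature value are hypothetical.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.07):
    """Generic InfoNCE loss: pull matching embedding pairs together,
    push non-matching pairs apart. anchors/positives: (N, D) arrays."""
    # Normalize embeddings so logits are scaled cosine similarities.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature  # (N, N) similarity matrix
    # Row-wise log-softmax; the i-th anchor should match the i-th positive.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(a))
    return -log_probs[idx, idx].mean()
```

When the matching pairs are identical (e.g., `info_nce(np.eye(4), np.eye(4))`) the loss is near zero; when pairings are scrambled, it grows large, which is the pressure that makes embeddings architecture-agnostic.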
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 14275