Tokenized Neural Fields: Structured Representations of Continuous Signals

Published: 23 Sept 2025 · Last Modified: 23 Dec 2025 · SPIGM @ NeurIPS · CC BY 4.0
Keywords: Tokenized neural fields, Implicit neural representations, Probabilistic inference, Generative modeling, Signal representation
TL;DR: We introduce Tokenized Neural Fields, which represent continuous signals with compact tokens and a shared decoder, enabling efficient reconstruction, emergent structure, and probabilistic inference for generative modeling across 1D, 2D, and 3D signals.
Abstract: We introduce Tokenized Neural Fields (TNF), a unified framework for representing continuous signals through a compact set of learnable tokens. Unlike encoder-based pipelines or global latent codes, TNF provides a structured tokenization in which individual tokens specialize in distinct aspects of a signal and interact with coordinate queries via cross-attention. This decoupling of representation from decoder architecture enables scalable training across modalities, efficient adaptation to new signals, and a natural basis for probabilistic inference in token space. We validate TNF across 1D function regression, 2D image reconstruction, and 3D scene modeling, showing that tokenized representations achieve superior fidelity with fewer parameters than encoder- or latent-based baselines. Beyond accurate reconstructions, TNF tokens exhibit emergent specialization, support interpolation and morphing, and enable generative modeling when paired with diffusion transformers. Together, these results highlight tokenization as a powerful paradigm for bridging implicit neural representations with the structured inference and generative capabilities increasingly central to large foundation models.
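The abstract's core mechanism (per-signal learnable tokens queried by coordinates through cross-attention into a shared decoder) can be illustrated with a minimal PyTorch sketch. Everything here is an assumption: the class name `TNFDecoder`, the token count, the coordinate embedding, and the head architecture are hypothetical stand-ins, not the paper's actual implementation.

```python
# Minimal sketch of a TNF-style decoder (assumed architecture, not the paper's code).
import torch
import torch.nn as nn

class TNFDecoder(nn.Module):
    """Shared decoder: coordinate queries cross-attend to a per-signal token set."""
    def __init__(self, coord_dim=2, out_dim=3, token_dim=128, num_heads=4):
        super().__init__()
        self.coord_embed = nn.Linear(coord_dim, token_dim)   # lift raw coordinates to token width
        self.cross_attn = nn.MultiheadAttention(token_dim, num_heads, batch_first=True)
        self.head = nn.Sequential(                           # map attended features to signal values
            nn.Linear(token_dim, token_dim), nn.GELU(), nn.Linear(token_dim, out_dim)
        )

    def forward(self, coords, tokens):
        # coords: (B, N, coord_dim) query points; tokens: (B, T, token_dim) per-signal tokens
        q = self.coord_embed(coords)
        attended, _ = self.cross_attn(q, tokens, tokens)     # each query attends over the token set
        return self.head(attended)                           # (B, N, out_dim) predicted signal values

# Each signal owns its own compact, learnable token set; decoder weights are shared across signals.
B, T, D = 1, 64, 128
tokens = nn.Parameter(torch.randn(B, T, D) * 0.02)           # hypothetical per-signal tokens
decoder = TNFDecoder(coord_dim=2, out_dim=3, token_dim=D)
xy = torch.rand(B, 1024, 2)                                  # 2D pixel coordinates in [0, 1]
rgb = decoder(xy, tokens)                                    # reconstructed RGB at queried points
```

Under this reading, adapting to a new signal means optimizing only its token set against reconstruction loss while the shared decoder stays frozen, and generative modeling (as the abstract describes with diffusion transformers) would operate directly on these token sets.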
Submission Number: 56