OmniField: Conditioned Neural Fields for Robust Multimodal Spatiotemporal Learning

Published: 26 Jan 2026 · Last Modified: 11 Mar 2026 · ICLR 2026 Poster · CC BY 4.0
Keywords: Conditioned Neural Fields, Multimodal Learning, Spatiotemporal Learning, Scientific Data, Neural Fields
TL;DR: Spatiotemporal scientific data are inherently multimodal yet sparse, noisy, and irregular; we introduce OmniField, a multimodal conditioned neural field for unified, robust spatiotemporal representation learning.
Abstract: Multimodal spatiotemporal learning on real-world experimental data is constrained by two challenges: (1) within-modality measurements are sparse, irregular, and noisy (QA/QC artifacts), yet cross-modally correlated; (2) the set of available modalities varies across space and time, shrinking the usable record unless models can adapt to arbitrary modality subsets at train and test time. We propose OmniField, a continuity-aware framework that learns a continuous neural field conditioned on the available modalities and iteratively fuses cross-modal context. A multimodal crosstalk block paired with iterative cross-modal refinement aligns signals before the decoder, enabling unified reconstruction, interpolation, forecasting, and cross-modal prediction without gridding or surrogate preprocessing. Extensive evaluations show that OmniField consistently outperforms eight strong multimodal spatiotemporal baselines. Under heavy simulated sensor noise, its performance remains close to clean-input levels, demonstrating robustness to corrupted measurements.
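To make the conditioning idea concrete, here is a minimal, hypothetical numpy sketch of a neural field conditioned on an arbitrary subset of modalities: each available modality is encoded to a token, a masked-mean "crosstalk" step fuses tokens over a few refinement iterations, and a decoder evaluates the field at continuous query coordinates. All names, sizes, and the pooling scheme are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Small random weights for a tanh MLP (illustrative, untrained)."""
    return [(rng.normal(0.0, 0.1, (a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

class ConditionedField:
    """Hypothetical stand-in for an OmniField-style conditioned neural field."""

    def __init__(self, n_mod=3, d_obs=4, d_coord=3, d_h=16, n_refine=2):
        self.encoders = [init_mlp([d_obs, d_h, d_h]) for _ in range(n_mod)]
        # "Crosstalk": updates each modality token from the pooled context.
        self.crosstalk = init_mlp([2 * d_h, d_h, d_h])
        self.decoder = init_mlp([d_coord + d_h, d_h, 1])
        self.n_refine = n_refine

    def __call__(self, coords, obs, mask):
        """
        coords: (Q, d_coord) continuous query points, e.g. (x, y, t)
        obs:    (n_mod, d_obs) one summary observation vector per modality
        mask:   (n_mod,) bool -- which modalities are available here/now
        """
        tokens = np.stack([mlp(enc, o) for enc, o in zip(self.encoders, obs)])
        m = mask[:, None].astype(float)
        # Iterative cross-modal refinement: fuse each token with the
        # masked mean of the available tokens, repeated n_refine times.
        for _ in range(self.n_refine):
            pooled = (tokens * m).sum(0) / max(m.sum(), 1.0)
            ctx = np.broadcast_to(pooled, tokens.shape)
            tokens = tokens + mlp(self.crosstalk,
                                  np.concatenate([tokens, ctx], axis=-1))
        context = (tokens * m).sum(0) / max(m.sum(), 1.0)
        # Decode the field value at every continuous query coordinate.
        ctx_q = np.broadcast_to(context, (coords.shape[0], context.shape[0]))
        return mlp(self.decoder, np.concatenate([coords, ctx_q], axis=-1))

field = ConditionedField()
coords = rng.normal(size=(5, 3))
obs = rng.normal(size=(3, 4))
out_full = field(coords, obs, np.array([True, True, True]))
out_sub = field(coords, obs, np.array([True, False, True]))  # modality dropped
```

The masked mean is what lets the same model run with any modality subset at train and test time: dropping a modality changes only the pooled context, never the parameter shapes.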
Supplementary Material: zip
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Submission Number: 9719