CAUSAL REASONING WITH LARGE FOUNDATION MODELS TO GUIDE DYNAMIC SYSTEM FORECASTING

ICLR 2026 Conference Submission 13309 Authors

18 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: AI for Science
Abstract: Prevailing data-driven models for spatio-temporal forecasting excel at interpolating within known patterns but often falter in critical real-world scenarios. This failure stems from a fundamental flaw: they learn \textbf{spurious correlations} from raw data, bypassing the underlying semantic and physical principles that form the true causal pathways. To address this, we introduce \textbf{\method{}} (Physics-informed Reasoning and Interpretation for Spatio-temporal Modeling), a framework that performs a \textbf{principled causal intervention}. \method{} employs a Vision-Language Model (VLM) to interpret spatio-temporal snapshots into semantic narratives, and a Large Language Model (LLM) to reason with these narratives and explicit physical laws, generating a causally-informed textual guidance. This guidance is then encoded to steer a downstream numerical predictor. Extensive experiments across fluid dynamics, weather forecasting, and urban traffic demonstrate that this intervention significantly enhances model capabilities. By repairing the causal chain, boosts \textit{out-of-distribution (OOD) generalization}, improves \textit{prediction under data sparsity}, and sharpens \textit{extreme event prediction}. Our work pioneers a new paradigm that unifies the pattern recognition of traditional models with the causal reasoning of large foundation models, paving the way for more reliable AI in science.
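The pipeline described in the abstract (VLM → LLM → guidance encoder → numerical predictor) can be sketched as follows. Every component below is a stand-in stub for illustration only; the function names, shapes, and the 0.01 steering coefficient are assumptions, not the authors' implementation.

```python
import numpy as np

def vlm_interpret(snapshot: np.ndarray) -> str:
    """Stub VLM: summarize the spatio-temporal snapshot as a narrative
    (here, just its mean intensity)."""
    return f"field with mean intensity {snapshot.mean():.2f}"

def llm_reason(narrative: str, physical_laws: list) -> str:
    """Stub LLM: fuse the narrative with explicit physical laws into
    textual guidance."""
    return narrative + "; constraints: " + "; ".join(physical_laws)

def encode_guidance(guidance: str, dim: int = 8) -> np.ndarray:
    """Stub text encoder: toy hash-seeded embedding of the guidance."""
    rng = np.random.default_rng(abs(hash(guidance)) % (2**32))
    return rng.standard_normal(dim)

def steered_predict(snapshot: np.ndarray, guidance_vec: np.ndarray) -> np.ndarray:
    """Stub predictor: persistence forecast nudged by the guidance vector
    (hypothetical steering rule)."""
    return snapshot + 0.01 * guidance_vec.mean()

snapshot = np.ones((4, 4))
narrative = vlm_interpret(snapshot)
guidance = llm_reason(narrative, ["incompressibility", "mass conservation"])
forecast = steered_predict(snapshot, encode_guidance(guidance))
print(forecast.shape)  # (4, 4)
```

The point of the sketch is the dataflow, not the components: semantic interpretation and causal reasoning happen in text space, and only the encoded guidance touches the numerical forecast.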
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Submission Number: 13309