Hybrid Symbolic-Neural Models for Dynamical Systems

18 Sept 2025 (modified: 24 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Hybrid Machine Learning, Orthogonal Regularization, Symbolic Regression
TL;DR: We learn dynamical systems in hybrid settings where both the symbolic part and the neural part are discovered from data while ensuring no overlap between them.
Abstract: Dynamical systems are fundamental to modeling the natural world, yet face a persistent trade-off: manually prescribed mechanistic models are interpretable by design but often overly simplistic and misspecified, while flexible data-driven neural methods lack physical insight. Hybrid modeling aims for the best of both worlds by combining a symbolic, physics-based component with a flexible neural network. A critical challenge, however, is that the neural component may relearn mechanistic parts, yielding redundant and uninterpretable models, especially when the symbolic structure itself is discovered from data. Existing methods using standard L2 regularization fail to prevent this overlap in the non-convex optimization landscapes created by symbolic regression. We introduce **OrthoReg** (Orthogonal Regularization), an approach that enforces explicit orthogonality between the symbolic and neural components. This guarantees a unique and complementary decomposition, preventing the neural component from learning dynamics that can be captured by the symbolic model. We demonstrate empirically on benchmark dynamical systems that OrthoReg improves out-of-distribution generalization, symbolic identification, and sparsity, thereby establishing a new paradigm for building more robust and interpretable hybrid models.
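The abstract does not give the exact form of the penalty, but one plausible minimal sketch (all names and the Monte-Carlo formulation below are assumptions, not the paper's implementation) is to penalize the squared normalized inner product between the symbolic and neural components, estimated on sample points:

```python
import numpy as np

def ortho_penalty(f_sym, f_nn, xs):
    """Hypothetical orthogonality penalty: squared normalized inner
    product <f_sym, f_nn>, estimated by averaging over sample points xs.

    The penalty is ~0 when the two components are empirically orthogonal
    and ~1 when they fully overlap, so adding it to the training loss
    discourages the neural part from duplicating dynamics already
    captured by the symbolic part.
    """
    s = f_sym(xs)
    n = f_nn(xs)
    inner = np.mean(s * n)  # Monte-Carlo estimate of the inner product
    norm = np.sqrt(np.mean(s**2) * np.mean(n**2)) + 1e-12  # scale invariance
    return (inner / norm) ** 2

xs = np.linspace(-1.0, 1.0, 1001)
# An odd and an even function are orthogonal on a symmetric interval.
print(ortho_penalty(np.sin, lambda x: x**2, xs))  # near 0
print(ortho_penalty(np.sin, np.sin, xs))          # near 1 (full overlap)
```

This sketch treats orthogonality in the L2 function-space sense over the sampled domain; the normalization makes the penalty invariant to the scale of either component, which matters when the neural network's output magnitude changes during training.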
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Submission Number: 11061