Context-Informed Neural ODEs Unexpectedly Identify Broken Symmetries: Insights from the Poincaré–Hopf Theorem
TL;DR: Neural ODEs trained exclusively on pre-bifurcation data can forecast symmetry-breaking post-bifurcation behaviors, which can be interpreted through Poincaré index theory.
Abstract: Out-Of-Domain (OOD) generalization is a significant challenge in learning dynamical systems, especially when they exhibit bifurcations, sudden topological transitions triggered by a model parameter crossing a critical threshold. A prevailing belief is that machine learning models, unless equipped with strong priors, struggle to generalize across bifurcations due to the abrupt changes in data characteristics. Contrary to this belief, we demonstrate that context-dependent Neural Ordinary Differential Equations (NODEs), trained solely on localized, pre-bifurcation, symmetric data and without physics-based priors, can still identify post-bifurcation, symmetry-breaking behaviors, even in a zero-shot manner. We attribute this capability to the model's implicit utilization of topological invariants, particularly the Poincaré index, and offer a formal explanation based on the Poincaré–Hopf theorem. We derive the conditions under which NODEs can recover—or erroneously hallucinate—broken symmetries without explicit training. Building on this insight, we showcase a topological regularizer inspired by the Poincaré–Hopf theorem and validate it empirically on phase transitions of systems described by the Landau–Khalatnikov equation.
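To make the abstract's central invariant concrete: the Poincaré index of an isolated fixed point of a planar vector field is the winding number of the field along a small closed curve around that point, and the Poincaré–Hopf theorem guarantees that the sum of indices inside a region is conserved across a bifurcation (e.g., a pitchfork replaces one stable node, index +1, with two stable nodes and a saddle, +1 + 1 − 1 = +1). The sketch below is purely illustrative and not the paper's implementation; the function name `poincare_index` and its parameters are our own.

```python
import math

def poincare_index(field, center=(0.0, 0.0), radius=0.5, samples=720):
    """Estimate the Poincaré index of a planar vector field `field(x, y) -> (fx, fy)`
    as the winding number of the field along a small circle around `center`."""
    cx, cy = center
    total = 0.0
    prev_angle = None
    for k in range(samples + 1):
        t = 2 * math.pi * k / samples
        fx, fy = field(cx + radius * math.cos(t), cy + radius * math.sin(t))
        angle = math.atan2(fy, fx)
        if prev_angle is not None:
            d = angle - prev_angle
            # unwrap jumps across the branch cut at +/- pi
            if d > math.pi:
                d -= 2 * math.pi
            elif d < -math.pi:
                d += 2 * math.pi
            total += d
        prev_angle = angle
    return round(total / (2 * math.pi))

# A stable node has index +1; a saddle has index -1.
print(poincare_index(lambda x, y: (-x, -y)))  # 1
print(poincare_index(lambda x, y: (x, -y)))   # -1
```

A model whose learned vector field preserves this integer-valued quantity is constrained in how its fixed points can appear or disappear, which is the mechanism the abstract invokes.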
Lay Summary: Many events, such as water freezing, heart arrhythmias, or market crashes, undergo sudden shifts when a small change pushes them past a critical point; such phenomena are called bifurcations. Machine learning (ML) is powerful for modeling complex systems, but it usually struggles to predict such sharp changes if it has never seen them before. Surprisingly, we discovered that certain ML models, trained only on pre-transition data, can predict what happens after a bifurcation. This is possible because the models implicitly leverage deeper structures, called topological invariants, that stay constant despite dramatic changes in the system. Just as a donut and a mug are topologically identical because each has one hole, certain topological properties of a system's dynamics are preserved across a bifurcation. These act as internal guides, helping models infer what comes next. One such invariant is the Poincaré index, a hidden fingerprint of the system's dynamics. Building on this idea, we explore how ML can leverage such clues to predict beyond what has been observed. We also design a new method for learning complicated phase transitions, achieving promising results. With further validation, this could help forecast events like heart problems or other critical transitions hidden in seemingly regular data, reshaping how we prepare for the unexpected.
Primary Area: Applications->Chemistry, Physics, and Earth Sciences
Keywords: neural ODE, dynamical system, bifurcation, symmetry breaking, out-of-domain, Poincaré index
Submission Number: 7637