Keywords: Symbolic Regression, Inductive Bias, Functional Separability, Recursive Decomposition, Deep Reinforcement Learning, Hierarchical Graph Structure, Levenberg–Marquardt Optimization, Scientific Machine Learning
TL;DR: We recursively detect additive and multiplicative separabilities in data, decomposing it into simpler components. This inductive bias guides reinforcement learning–based symbolic regression, achieving state-of-the-art results on SRBench Feynman.
Abstract: Symbolic regression (SR) can recover analytic laws from data, but its search space is enormous. Many scientific targets are structurally simple, for example additively or multiplicatively separable, yet most SR pipelines do not exploit this. We introduce a recursive structure discovery step that tests for separability using accurate derivatives from a small neural model trained with second-order updates. The method decomposes $y=f(\mathbf{x})$ into a hierarchy of simpler subfunctions, which we feed to SR as a structure prior. The plug-in can attach to any SR backend; here we pair it with a deep RL generator. It substantially reduces search complexity, improves interpretability, and remains robust to noise, maintaining reliable separability detection under challenging conditions. On SRBench (Feynman, 120 equations), the structure-aware pipeline achieves state-of-the-art exact recovery, outperforming separability-only, pure RL, and prior hybrid baselines.
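The separability criterion the abstract relies on can be illustrated with a minimal sketch: a function is additively separable in $x_i$ versus $x_j$ exactly when the mixed partial $\partial^2 f / \partial x_i \partial x_j$ vanishes everywhere. The finite-difference check below is a hypothetical stand-in for the paper's neural derivative estimates; function names and tolerances are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mixed_partial(f, x, i, j, h=1e-4):
    """Central finite-difference estimate of d^2 f / dx_i dx_j at point x."""
    x = np.asarray(x, dtype=float)
    e_i = np.zeros_like(x); e_i[i] = h
    e_j = np.zeros_like(x); e_j[j] = h
    return (f(x + e_i + e_j) - f(x + e_i - e_j)
            - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h * h)

def is_additively_separable(f, points, i, j, tol=1e-3):
    """Additive separability in (x_i, x_j) <=> mixed partial ~ 0 at all samples.
    (Multiplicative separability is the same test applied to log|f|.)"""
    return all(abs(mixed_partial(f, p, i, j)) < tol for p in points)

# Additively separable: f(x) = sin(x0) + x1^2, so the mixed partial is zero.
f_add = lambda x: np.sin(x[0]) + x[1] ** 2
# Not additively separable: f(x) = x0 * x1, mixed partial is 1.
f_mul = lambda x: x[0] * x[1]

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(20, 2))
print(is_additively_separable(f_add, pts, 0, 1))  # True
print(is_additively_separable(f_mul, pts, 0, 1))  # False
```

In the paper's pipeline, a detected separability would split the dataset into independent subproblems for the SR backend, applied recursively to each subfunction.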
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Submission Number: 22845