Improved identification accuracy in equation learning via comprehensive $\boldsymbol{R^2}$-elimination and Bayesian model selection
Abstract: In the field of equation learning, exhaustively considering all possible combinations of basis functions from a dictionary is infeasible. Sparse regression and greedy algorithms have emerged as popular approaches to tackle this challenge. However, strong collinearities pose difficulties for sparse regression techniques, and greedy steps may inadvertently exclude important components of the true equation, leading to reduced identification accuracy. In this article, we present a novel algorithm that strikes a balance between comprehensiveness and efficiency in equation learning. Inspired by stepwise regression, our approach combines the coefficient of determination, $R^2$, and the Bayesian model evidence, $p(y|\mathcal{M})$, in a new way. Through three extensive numerical experiments involving random polynomials and dynamical systems, we compare our method against two standard approaches, four state-of-the-art methods, and bidirectional stepwise regression incorporating $p(y|\mathcal{M})$. The results demonstrate that our less greedy algorithm surpasses all other methods in identification accuracy. Furthermore, we uncover a heuristic that compensates for the missing overfitting penalty of $R^2$ and propose an equation learning procedure based solely on $R^2$, which achieves high rates of exact equation recovery.
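To make the two selection criteria mentioned in the abstract concrete, here is a minimal, hypothetical sketch (not the authors' implementation): it scores a candidate subset of dictionary columns by its least-squares $R^2$ and by a BIC-style surrogate for the log model evidence $\log p(y|\mathcal{M})$ under an assumed Gaussian noise model. The names `Theta`, `y`, and `subset` are illustrative, and the paper's exact evidence computation may differ.

```python
# Illustrative sketch only: scoring a candidate subset of dictionary functions
# by R^2 and by a BIC-style approximation of the Bayesian model evidence.
import numpy as np

def r_squared(Theta, y, subset):
    """Coefficient of determination of the least-squares fit on the chosen columns."""
    X = Theta[:, subset]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ coef
    ss_res = residuals @ residuals
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def log_evidence_bic(Theta, y, subset):
    """BIC-style surrogate for log p(y|M), assuming i.i.d. Gaussian noise."""
    n, k = len(y), len(subset)
    X = Theta[:, subset]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ coef) ** 2)
    sigma2 = max(rss / n, 1e-12)               # ML noise variance, guarded against zero
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return log_lik - 0.5 * k * np.log(n)       # complexity penalty that plain R^2 lacks
```

A search procedure could, for instance, use $R^2$ to cheaply prune implausible subsets and then compare the surviving candidates by their (approximate) evidence; the elimination and selection steps described in the paper may differ in detail.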
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: All points of reviewer F8VX have been addressed, as well as the main suggestions of the other reviewers.
We changed the title to avoid the confusion about "less greedy" pointed out by the reviewers. Consequently, the notation for our methods has also changed (e.g., LG for "less greedy" is now CS for "comprehensive search").
We followed the recommendation of most reviewers and moved the algorithms from the appendix to the main text, and instead moved the schematic illustration of our approach to the appendix.
As requested by the reviewers, we included a complexity analysis and an empirical runtime analysis of our comprehensive search method.
We expanded the literature review as requested by the reviewers.
Based on the suggestions of reviewer F8VX, we improved the clarity of the description of our approach, in particular regarding the classification of greedy and thresholding algorithms, the error term assumptions, the hyperparameter choices, the definition of models through selections of basis functions, the notation (now correctly distinguishing variables, data vectors, and data matrices), and the explanation of the model evidence in the appendix (including a mention of the weak formulation of differential equations), along with minor corrections and improvements.
We checked for formatting issues and now distinguish between citet and citep citations.
We revised the accompanying code to run with current versions of packages.
Supplementary Material: zip
Assigned Action Editor: ~Stephen_Becker1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1307