Abstract: Parameter estimation algorithms using higher order gradient-based methods are increasingly sought after in machine learning. Such methods, however, may become unstable when regressors are time-varying. Inspired by techniques employed in adaptive systems, this letter proposes a new variational perspective to derive four higher order tuners with provable stability guarantees. This perspective combines higher order tuners with normalization and allows stability to be established for problems with time-varying regressors. The stability analysis builds on a novel technique, rooted in symplectic mechanics, that links Lagrangians and Hamiltonians to the underlying Lyapunov analysis, and is carried out for common linear-in-parameter models.
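To make the setting concrete, the sketch below illustrates the general flavor of a normalized higher order tuner on a linear-in-parameter model y_t = θ*ᵀφ_t with a time-varying regressor φ_t. This is an illustrative Euler-discretized example, not the letter's specific algorithms: the gains gamma, beta, mu, the regressor schedule, and the update form are all assumptions chosen for demonstration.

```python
import numpy as np

# Illustrative sketch only (not the paper's exact tuners): a higher order
# tuner couples the parameter estimate theta with an auxiliary state nu,
# and normalizes the gradient by N = 1 + mu*||phi||^2 so that large
# time-varying regressors cannot destabilize the update.
rng = np.random.default_rng(0)
d = 3
theta_star = np.array([1.0, -2.0, 0.5])   # unknown true parameters

gamma, beta, mu = 0.1, 0.5, 1.0           # hypothetical gain choices
theta = np.zeros(d)                        # parameter estimate
nu = np.zeros(d)                           # auxiliary (momentum-like) state

for t in range(2000):
    # time-varying regressor: Gaussian direction with a slowly varying scale
    phi = rng.standard_normal(d) * (1.0 + np.sin(0.01 * t))
    e = (theta - theta_star) @ phi         # prediction error e_t = theta^T phi - y_t
    N = 1.0 + mu * (phi @ phi)             # normalization signal
    nu = nu - gamma * e * phi / N          # normalized gradient step on nu
    theta = theta - beta * (theta - nu)    # theta relaxes toward nu (higher order)

print(np.linalg.norm(theta - theta_star))  # estimation error after 2000 steps
```

Without the division by N, a burst in ||φ_t|| acts like an unbounded step size and the momentum-like coupling can diverge; the normalization caps the effective gradient gain at gamma/mu regardless of how the regressor varies.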