Efficient Levenberg-Marquardt for SLAM

Published: 10 Oct 2024, Last Modified: 07 Dec 2024 · NeurIPS 2024 Workshop · CC BY 4.0
Keywords: LM, GN, SLAM, BA
TL;DR: Make Levenberg-Marquardt more efficient by using Reinforcement Learning to determine when the Gauss-Newton (GN) computation can be skipped.
Abstract: The Levenberg-Marquardt optimization algorithm is widely used in many applications and is well known for its role in Bundle Adjustment (BA), a common method for solving localization and mapping problems. BA is an iterative process in which a system of non-linear equations is solved by combining two optimization methods: Gauss-Newton (GN), which requires considerable computational resources due to the calculation of the Hessian, and Gradient Descent (GD). The two methods are weighted by a damping factor, $\lambda$, which the Levenberg-Marquardt algorithm chooses heuristically at each iteration. Each method is better suited to different parts of the solving process; however, in the classic approach, the computationally expensive GN step is calculated in every iteration, even when it is not needed. We therefore propose predicting in which iterations the GN calculation can be skipped altogether, viewing the problem holistically and formulating it as a Reinforcement Learning (RL) task that extends a previous solution which also predicts the value of $\lambda$. We demonstrate that our method reduces the time required for BA convergence by an average of 12.5%.
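The following is a minimal sketch (not the authors' implementation) of the idea described in the abstract: a standard Levenberg-Marquardt loop in which a decision function chooses, per iteration, whether to solve the damped Gauss-Newton system or to fall back to a cheap gradient-descent step. The `skip_gn` function and its threshold are hypothetical placeholders standing in for the learned RL policy; the problem, step sizes, and $\lambda$ update schedule are illustrative assumptions.

```python
import numpy as np

def skip_gn(iteration, lam):
    # Placeholder for the paper's learned RL policy (assumption):
    # skip the GN solve when damping is large, i.e. when the step is
    # dominated by the gradient-descent term anyway.
    return lam > 1e2

def numerical_jacobian(f, x, eps=1e-7):
    # Simple forward-difference Jacobian of the residual vector f at x.
    r0 = f(x)
    J = np.zeros((len(r0), len(x)))
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - r0) / eps
    return J

def levenberg_marquardt(residual, x0, lam=1e-2, max_iter=50, tol=1e-10):
    x = np.asarray(x0, dtype=float)
    for it in range(max_iter):
        r = residual(x)
        J = numerical_jacobian(residual, x)
        g = J.T @ r                                  # gradient of 0.5*||r||^2
        if skip_gn(it, lam):
            # Gradient-descent-only step: no J^T J build, no linear solve.
            dx = -g / lam
        else:
            # Damped Gauss-Newton (classic LM) step.
            H = J.T @ J                              # GN Hessian approximation
            dx = np.linalg.solve(H + lam * np.eye(len(x)), -g)
        x_new = x + dx
        if np.sum(residual(x_new) ** 2) < np.sum(r ** 2):
            x, lam = x_new, max(lam / 3.0, 1e-7)     # accept step, trust GN more
        else:
            lam = min(lam * 3.0, 1e7)                # reject step, lean on GD
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy usage: fit y = a * exp(b * t) to samples generated with a=2.0, b=1.5.
t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
print(levenberg_marquardt(res, [1.0, 1.0]))
```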
Submission Number: 27