Fine-Tuning of Neural Network Approximate MPC without Retraining via Bayesian Optimization

Published: 22 Oct 2024, Last Modified: 06 Nov 2024 · CoRL 2024 Workshop SAFE-ROL Poster · CC BY 4.0
Keywords: Model Predictive Control, Bayesian Optimization, Imitation Learning, Neural Network Control
TL;DR: We use Bayesian optimization to auto-tune neural network approximate MPC controllers without retraining, using only a few hardware experiments; we demonstrate this in hardware on a cartpole and a reaction-wheel unicycle robot.
Abstract: Approximate model-predictive control (AMPC) aims to imitate an MPC's behavior with a neural network, removing the need to solve an expensive optimization problem at runtime. However, during deployment, the parameters of the underlying MPC must usually be fine-tuned. This often renders AMPC impractical due to the need to repeatedly generate a new dataset and retrain the neural network. Recent work addresses this problem by adapting AMPC without retraining using approximated sensitivities of the MPC's optimization problem. However, currently, this adaptation must be done by hand, which is labor-intensive and can be unintuitive for high-dimensional systems. To solve this issue, we propose using Bayesian optimization to tune the parameters of AMPC policies based on experimental data. By combining model-based control with direct and local learning, our approach achieves superior performance to nominal AMPC on hardware, with minimal experimentation. This allows automatic and data-efficient adaptation of AMPC to new system instances and fine-tuning to cost functions that are difficult to implement in MPC. We demonstrate the proposed method in hardware experiments for the swing-up maneuver of a cartpole and yaw control of an under-actuated balancing unicycle robot, a challenging control problem.
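The core loop described in the abstract, i.e. choosing an AMPC parameter, running a hardware experiment, and letting Bayesian optimization propose the next parameter, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `run_experiment` is a hypothetical stand-in for a hardware rollout (here simulated as a noisy quadratic), the Gaussian-process model is a bare-bones numpy RBF GP, and a lower-confidence-bound acquisition is used, whereas the paper's actual model, acquisition, and parameterization may differ.

```python
import numpy as np

def run_experiment(theta, rng):
    # Hypothetical stand-in for one hardware rollout: returns the measured
    # closed-loop cost of the AMPC policy run with tuning parameter `theta`.
    # (Here simulated as a quadratic with unknown optimum 0.7 plus noise.)
    return (theta - 0.7) ** 2 + 0.01 * rng.standard_normal()

def rbf(a, b, ell=0.2):
    # Squared-exponential kernel on scalar inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def bo_tune(n_init=3, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    # Initial design: a few experiments at random parameters in [0, 1].
    X = rng.uniform(0.0, 1.0, n_init)
    y = np.array([run_experiment(t, rng) for t in X])
    cand = np.linspace(0.0, 1.0, 201)  # candidate grid for the acquisition
    for _ in range(n_iter):
        # GP posterior at the candidates (centered data, small jitter).
        K = rbf(X, X) + 1e-4 * np.eye(len(X))
        Ks = rbf(cand, X)
        alpha = np.linalg.solve(K, y - y.mean())
        mu = y.mean() + Ks @ alpha
        var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
        sd = np.sqrt(np.clip(var, 1e-12, None))
        # Lower confidence bound: trade off low predicted cost vs. uncertainty.
        acq = mu - 2.0 * sd
        t_next = cand[np.argmin(acq)]
        # Run one more (simulated) hardware experiment at the proposed point.
        X = np.append(X, t_next)
        y = np.append(y, run_experiment(t_next, rng))
    i_best = int(np.argmin(y))
    return X[i_best], y[i_best]

theta_best, cost_best = bo_tune()
print(f"best parameter {theta_best:.3f}, measured cost {cost_best:.4f}")
```

Each iteration costs exactly one experiment, which is what makes the approach data-efficient enough for hardware tuning; in practice `theta` would be the MPC cost weights fed to the sensitivity-based AMPC adaptation, and the measured cost could encode objectives that are hard to express inside the MPC itself.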
Supplementary Material: zip
Submission Number: 17