Keywords: Reinforcement learning; Model-based reinforcement learning; Autonomous racing; Physics-informed learning; Data augmentation
TL;DR: We show that physics-informed data augmentation with lightweight models improves sample efficiency, safety, and generalization in model-based RL for autonomous racing.
Abstract: A central challenge in reinforcement learning (RL) is achieving agents that generalize and adapt to new tasks and conditions. Many works address this via offline RL, which is constrained by dataset coverage, or online RL, which requires costly and potentially unsafe exploration. We propose a framework for rapid adaptation of RL agents by augmenting model-based RL with physics-informed data augmentation. Specifically, we use lightweight analytical models to generate stable, physics-grounded rollouts that complement real interaction data and allow the model-based RL agent to adapt in just a few trials. We validate our approach in autonomous racing, an extreme testbed with fast dynamics and strict safety constraints, using Assetto Corsa paired with lightweight vehicle models for data augmentation. Across diverse tracks and surfaces, our method achieves faster convergence, lower lap times, and fewer incidents than a set of strong baselines.
Although demonstrated in racing, our framework is domain-agnostic, offering a practical path to data-efficient control wherever simple models exist as priors.
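To make the augmentation idea concrete, here is a minimal sketch of the kind of pipeline the abstract describes: a lightweight analytical prior (a kinematic bicycle model, a standard choice for vehicle dynamics) generates short physics-grounded rollouts from real start states, and these synthetic transitions are mixed into the real interaction data. All function names, the specific vehicle model, and the mixing ratio are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def bicycle_step(state, action, dt=0.05, wheelbase=2.5):
    """Kinematic bicycle model: a lightweight analytical prior
    (an assumed stand-in for the paper's vehicle models).
    state = (x, y, heading, speed); action = (steer, accel)."""
    x, y, theta, v = state
    steer, accel = action
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += v / wheelbase * np.tan(steer) * dt
    v = max(0.0, v + accel * dt)
    return np.array([x, y, theta, v])

def augment_buffer(real_transitions, policy, horizon=5, ratio=0.5, rng=None):
    """Roll the analytical model forward from randomly chosen real
    start states and append the synthetic (s, a, s') transitions,
    so roughly `ratio * horizon` synthetic samples are added per real one."""
    rng = rng or np.random.default_rng(0)
    synthetic = []
    n_rollouts = int(len(real_transitions) * ratio)
    starts = rng.choice(len(real_transitions), size=n_rollouts)
    for i in starts:
        s = real_transitions[i][0]          # real state anchors the rollout
        for _ in range(horizon):
            a = policy(s)
            s_next = bicycle_step(s, a)     # physics-grounded, stable step
            synthetic.append((s, a, s_next))
            s = s_next
    return real_transitions + synthetic
```

A model-based RL agent would then fit its learned dynamics model (or train its policy) on the augmented buffer rather than on the sparse real data alone, which is one plausible way the few-trial adaptation claimed above could be realized.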
Primary Area: reinforcement learning
Submission Number: 4974