Safe Guaranteed Dynamics Exploration with Probabilistic Models

Published: 23 Sept 2025, Last Modified: 01 Dec 2025
License: CC BY 4.0
Track: Research Track
Keywords: Safe exploration, Active learning, Dynamical system, Gaussian Processes, Safety
Abstract: Deploying agents in the real world is inherently challenging due to their $\textit{a priori}$ unknown dynamics and the need for rigorous safety guarantees. Without an accurate model, an agent risks taking unsafe actions or even failing to complete its task. To address this problem, we introduce a notion of maximum safe dynamics learning through sufficient exploration in the space of safe policies. We propose a $\textit{pessimistically}$ safe framework that $\textit{optimistically}$ explores informative states and, even when they cannot be reached due to model uncertainty, ensures continuous online learning of the dynamics. The framework achieves first-of-its-kind results: it non-episodically learns the dynamics model sufficiently, up to an arbitrarily small tolerance (subject to noise), in finite time, while ensuring provably safe operation throughout with high probability. Building on this, we propose an algorithm that maximizes rewards while learning the dynamics $\textit{only to the extent needed}$ to achieve close-to-optimal performance. Unlike typical reinforcement learning (RL) methods, our approach operates online in a non-episodic setting and ensures safety throughout the learning process. We demonstrate the effectiveness of our approach in challenging domains such as autonomous car racing and drone navigation under aerodynamic effects, scenarios where safety is critical and accurate modeling is difficult.
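To make the pessimistic-safe / optimistic-exploration idea concrete, here is a minimal illustrative sketch, not the paper's algorithm: a Gaussian process models an unknown scalar function of the state, the agent only visits states whose pessimistic (lower-confidence) estimate satisfies a safety threshold, and it explores the most uncertain point inside that safe set. The 1-D setup, the RBF kernel, and all names (`f`, `h_min`, `beta`, `gp_posterior`) are assumptions for illustration only.

```python
# Illustrative sketch only: GP-based safe exploration in one dimension.
import numpy as np

def rbf_kernel(a, b, ls=0.5):
    # Squared-exponential kernel between two 1-D point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-2):
    """Standard GP regression posterior mean and std at query points."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_query, x_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(1.0 - np.sum(v * v, axis=0), 1e-12, None)  # prior variance is 1
    return mu, np.sqrt(var)

# Hypothetical unknown function f; safety requires f(x) >= h_min.
f = lambda x: np.sin(3 * x)
h_min, beta = -0.5, 2.0                       # safety threshold, confidence scaling
x_grid = np.linspace(-1.0, 1.0, 200)
X, Y = np.array([0.0]), np.array([f(0.0)])    # start from a known-safe seed state

for _ in range(20):
    mu, sd = gp_posterior(X, Y, x_grid)
    safe = (mu - beta * sd) >= h_min          # pessimistic safe set
    if not safe.any():
        break
    # Optimistic exploration: most uncertain point within the pessimistic safe set.
    idx = np.argmax(np.where(safe, sd, -np.inf))
    X = np.append(X, x_grid[idx])
    Y = np.append(Y, f(x_grid[idx]) + 0.01 * np.random.randn())

print(f"visited {len(X)} states; certified-safe fraction of grid: {safe.mean():.2f}")
```

The key design choice mirrored here is the asymmetry between acting and exploring: actions are restricted by the lower confidence bound (pessimism keeps operation safe with high probability), while the exploration target is chosen by posterior uncertainty (optimism drives information gain), so the safe set can only grow as data accumulates.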
Submission Number: 29