Scalable and Efficient Exploration via Intrinsic Rewards in Continuous-time Dynamical Systems

Published: 12 Jun 2025, Last Modified: 08 Jul 2025
Venue: EXAIT@ICML 2025 Poster
License: CC BY 4.0
Track: Robotics
Keywords: continuous-time reinforcement learning, model-based RL, intrinsic rewards, epistemic uncertainty, exploration-exploitation trade-off
TL;DR: We propose COMBRL, a scalable continuous-time RL algorithm that balances reward and epistemic uncertainty using intrinsic rewards, achieving sublinear regret and strong performance in both supervised and unsupervised settings.
Abstract: Reinforcement learning algorithms are typically designed for discrete-time dynamics, even though the underlying real-world control systems are often continuous in time. In this paper, we study the problem of continuous-time reinforcement learning, where the unknown system dynamics are represented by nonlinear ordinary differential equations (ODEs). We leverage probabilistic models, such as Gaussian processes and Bayesian neural networks, to learn an uncertainty-aware model of the underlying ODE. Our algorithm, COMBRL, greedily maximizes a weighted sum of the extrinsic reward and the model's epistemic uncertainty. We show that this approach attains sublinear regret in the continuous-time setting. Furthermore, in the unsupervised RL setting (i.e., without extrinsic rewards), we provide a sample complexity bound. In our experiments, we evaluate COMBRL in the standard and unsupervised RL settings and show that it outperforms the baselines across several deep RL tasks.
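The abstract's core idea, maximizing a weighted sum of extrinsic reward and epistemic uncertainty of a learned ODE model, can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: the ensemble of random linear dynamics models, the weight `beta`, and helper names such as `predict_derivative` and `augmented_reward` are illustrative assumptions standing in for the probabilistic models (GPs or Bayesian neural networks) and the actual COMBRL objective.

```python
import numpy as np

# Illustrative sketch only: ensemble disagreement on a learned ODE model
# serves as an epistemic-uncertainty bonus added to the task reward.
rng = np.random.default_rng(0)

state_dim, action_dim, n_models = 3, 1, 5
beta = 1.0  # hypothetical weight on the intrinsic (uncertainty) term

# Stand-in for an ensemble of probabilistic dynamics models f_i(s, a) ~ ds/dt.
# In practice these would be GPs or Bayesian NNs fit to observed trajectories;
# here they are random linear maps to keep the example self-contained.
ensemble = [rng.normal(size=(state_dim, state_dim + action_dim)) * 0.1
            for _ in range(n_models)]

def predict_derivative(model, s, a):
    """One ensemble member's estimate of the state derivative ds/dt."""
    return model @ np.concatenate([s, a])

def epistemic_uncertainty(s, a):
    """Disagreement across ensemble members (mean per-dimension std)."""
    preds = np.stack([predict_derivative(m, s, a) for m in ensemble])
    return float(preds.std(axis=0).mean())

def extrinsic_reward(s, a):
    """Placeholder task reward, e.g. negative distance to the origin."""
    return -float(np.linalg.norm(s))

def augmented_reward(s, a):
    """Weighted sum of extrinsic reward and epistemic uncertainty,
    i.e. the kind of objective a COMBRL-style agent would greedily maximize."""
    return extrinsic_reward(s, a) + beta * epistemic_uncertainty(s, a)

s = rng.normal(size=state_dim)
a = rng.normal(size=action_dim)
print(f"extrinsic={extrinsic_reward(s, a):.3f}, "
      f"bonus={beta * epistemic_uncertainty(s, a):.3f}, "
      f"total={augmented_reward(s, a):.3f}")
```

Setting `beta = 0` recovers greedy exploitation of the extrinsic reward, while dropping the extrinsic term corresponds to the unsupervised (reward-free) exploration setting described in the abstract.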
Serve As Reviewer: ~Klemens_Iten1
Submission Number: 63