Sample-efficient and Scalable Exploration in Continuous-Time RL

Published: 26 Jan 2026, Last Modified: 11 Feb 2026 · ICLR 2026 Poster · CC BY 4.0
Keywords: continuous-time reinforcement learning, model-based RL, intrinsic rewards, epistemic uncertainty, exploration-exploitation trade-off
TL;DR: We propose COMBRL, a scalable continuous-time RL algorithm that balances reward and epistemic uncertainty using intrinsic rewards, achieving sublinear regret and strong performance in both reward-driven and unsupervised settings.
Abstract: Reinforcement learning algorithms are typically designed for discrete-time dynamics, even though the underlying real-world control systems are often continuous in time. In this paper, we study the problem of continuous-time reinforcement learning, where the unknown system dynamics are represented using nonlinear ordinary differential equations (ODEs). We leverage probabilistic models, such as Gaussian processes and Bayesian neural networks, to learn an uncertainty-aware model of the underlying ODE. Our algorithm, COMBRL, greedily maximizes a weighted sum of the extrinsic reward and the model's epistemic uncertainty. This yields a scalable and sample-efficient approach to continuous-time model-based RL. We show that COMBRL achieves sublinear regret in the reward-driven setting, and in the unsupervised RL setting (i.e., without extrinsic rewards), we provide a sample complexity bound. In our experiments, we evaluate COMBRL in both standard and unsupervised RL settings and demonstrate that it scales better and is more sample-efficient than prior methods, outperforming baselines across several deep RL tasks.
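The abstract describes the objective only at a high level: a weighted sum of extrinsic reward and model epistemic uncertainty accumulated along a continuous-time rollout. Below is a minimal Python sketch of such an objective, not the paper's implementation; the names `reward_fn`, `epistemic_std`, the trade-off weight `beta`, and the discretization step `dt` are illustrative assumptions rather than the paper's notation.

```python
import numpy as np

# Hypothetical sketch of a reward-plus-uncertainty exploration objective:
# accumulate r(x, u) + beta * ||sigma(x, u)|| along a discretized rollout,
# where sigma is the epistemic uncertainty of the learned ODE model.
def exploration_return(states, actions, reward_fn, epistemic_std, beta=1.0, dt=0.05):
    """Discretized estimate of the integral of r(x, u) + beta * ||sigma(x, u)|| dt."""
    total = 0.0
    for x, u in zip(states, actions):
        extrinsic = reward_fn(x, u)                      # task reward r(x, u)
        intrinsic = np.linalg.norm(epistemic_std(x, u))  # model epistemic uncertainty
        total += (extrinsic + beta * intrinsic) * dt     # integrate along the rollout
    return total

# Toy usage with placeholder reward and uncertainty functions.
states = [np.zeros(3) for _ in range(10)]
actions = [np.ones(1) for _ in range(10)]
print(exploration_return(states, actions,
                         reward_fn=lambda x, u: -float(x @ x),
                         epistemic_std=lambda x, u: 0.1 * np.ones_like(x)))
```

With `beta = 0` this reduces to greedy reward maximization, while a large `beta` drives purely uncertainty-seeking (unsupervised) exploration, which is the trade-off the abstract refers to.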
Primary Area: reinforcement learning
Submission Number: 7688