Realtime Reinforcement Learning: Towards Rapid Asynchronous Deployment of Large Models

Published: 19 Jun 2024, Last Modified: 03 Oct 2024, ARLET 2024 Poster, CC BY 4.0
Keywords: Realtime Environments, Asynchronous Algorithms, Time Discretization, Real World Deployment, Deep Reinforcement Learning
Abstract: Realtime environments change even as agents perform action inference and learning, thus requiring high interaction frequencies to effectively minimize long-term regret. However, recent advances in machine learning involve larger neural networks with longer inference times, raising questions about their applicability in realtime systems where quick reactions are crucial. We present an analysis of lower bounds on regret in realtime environments, showing that minimizing long-term regret is generally impossible within the typical sequential interaction and learning paradigm, but often becomes possible when sufficient asynchronous compute is available. We propose novel algorithms for staggering asynchronous inference processes so that actions are taken at consistent time intervals, and demonstrate that the use of models with long action inference times is constrained only by the environment's effective stochasticity over the inference horizon, not by action frequency. Our analysis shows that the number of inference and learning processes needed scales linearly with inference time, enabling the use of models multiple orders of magnitude larger than those in existing approaches when learning from a realtime simulation of Game Boy games such as Pokémon and Tetris.
Submission Number: 53
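The staggering idea in the abstract can be illustrated with a minimal sketch (not the authors' implementation). Assuming an inference time tau and a target action interval delta, K = ceil(tau / delta) workers whose start times are offset by delta collectively emit one action every delta, matching the linear scaling claim. All names and constants below (`policy`, `staggered_worker`, `INFERENCE_TIME`, `ACTION_PERIOD`) are hypothetical stand-ins.

```python
import math
import threading
import time

# Assumed timing constants, chosen for illustration only.
INFERENCE_TIME = 0.4   # tau: seconds one forward pass takes
ACTION_PERIOD = 0.1    # delta: desired interval between emitted actions

def policy(observation):
    """Placeholder policy; sleep stands in for a large model's slow forward pass."""
    time.sleep(INFERENCE_TIME)
    return observation  # dummy action

def staggered_worker(worker_id, num_workers, get_obs, act, stop):
    # Offset each worker's start by delta so the K workers' outputs interleave,
    # producing one action every ACTION_PERIOD despite slow inference.
    time.sleep(worker_id * ACTION_PERIOD)
    period = num_workers * ACTION_PERIOD  # each worker acts once per cycle
    next_deadline = time.monotonic()
    while not stop.is_set():
        obs = get_obs()
        action = policy(obs)  # the environment keeps evolving meanwhile
        act(worker_id, action)
        next_deadline += period
        time.sleep(max(0.0, next_deadline - time.monotonic()))

def main():
    # Linear scaling from the abstract: K = ceil(tau / delta) workers suffice.
    num_workers = math.ceil(INFERENCE_TIME / ACTION_PERIOD)
    stop = threading.Event()
    state = {"t": 0}

    def get_obs():
        state["t"] += 1  # stand-in for reading the live environment state
        return state["t"]

    def act(worker_id, action):
        print(f"{time.monotonic():.2f}s worker {worker_id} acts on obs {action}")

    workers = [
        threading.Thread(
            target=staggered_worker,
            args=(i, num_workers, get_obs, act, stop),
            daemon=True,
        )
        for i in range(num_workers)
    ]
    for w in workers:
        w.start()
    time.sleep(2.0)  # observe ~one action per ACTION_PERIOD in the output
    stop.set()

if __name__ == "__main__":
    main()
```

Running this prints one action roughly every 0.1 s even though each forward pass takes 0.4 s, since four staggered workers cover each other's inference windows; the delayed observations this introduces are what the abstract's "effective stochasticity over the inference horizon" bound addresses.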