Keywords: deep reinforcement learning, continuous-time, robotics
TL;DR: Reinforcement learning formulation that allows agents to think and act at the same time, demonstrated on real-world robotic grasping.
Abstract: We study reinforcement learning in settings where sampling an action from the policy must occur concurrently with the time evolution of the controlled system, such as when a robot must decide on its next action while still performing the previous one. Much like a person or an animal, the robot must think and move at the same time. To develop an algorithmic framework for such concurrent control problems, we start from a continuous-time formulation of the Bellman equations, and then discretize them in a way that is aware of system delays. We instantiate this new class of approximate dynamic programming methods via a simple architectural extension to existing value-based deep reinforcement learning algorithms. We evaluate our methods on simulated benchmark tasks and a large-scale robotic grasping task where the robot must "think while moving."
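The abstract's core idea of discretizing the continuous-time Bellman equations in a delay-aware way can be illustrated with a minimal sketch. This is not the paper's implementation; the function names and parameters (`dt`, `gamma`) are assumptions chosen for illustration. The point is that when step durations vary (e.g. because the agent keeps acting while it "thinks"), the discount should be applied per unit of elapsed time, `gamma ** dt`, rather than once per step:

```python
def time_discretized_target(reward, q_next, dt, gamma=0.99):
    """Hypothetical delay-aware one-step Bellman target.

    Discounts by gamma ** dt (elapsed time), so the backup remains
    consistent when steps have different real-time durations.
    """
    return reward + (gamma ** dt) * q_next


def discounted_return(rewards, durations, gamma=0.99):
    """Continuous-time discounted return: each reward r_i is discounted
    by gamma raised to the total time elapsed before step i."""
    total, elapsed = 0.0, 0.0
    for r, dt in zip(rewards, durations):
        total += (gamma ** elapsed) * r
        elapsed += dt
    return total
```

For example, with `gamma=0.5` two unit-duration rewards of 1 give a return of `1 + 0.5 = 1.5`, whereas halving the step durations to 0.5 gives `1 + 0.5**0.5 ≈ 1.707`; a fixed per-step discount would conflate these two situations.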
Data: [DeepMind Control Suite](https://paperswithcode.com/dataset/deepmind-control-suite)