Despite the significant advances in Deep Reinforcement Learning (RL) over the last decade, the amount of training experience needed to learn effective policies remains a primary concern in both simulated and real environments. To address this issue, previous work has shown that training efficiency can be improved by modeling agent and environment separately, but this usually requires a supervisory agent mask. In contrast to RL agents, humans can perfect a new skill from a very small number of trials, and in most cases do so without a supervisory signal, making neuroscientific studies of human development a valuable source of inspiration for RL. In particular, we explore the idea of motor prediction: humans develop an internal model of themselves and of the consequences their motor commands have on immediate sensory inputs. Our insight is that the agent's own movement provides a cue from which the duality between agent and environment can be learned. To instantiate this idea, we present Ego-Foresight, a self-supervised method for disentangling agent and environment based on motion and prediction. Our main finding is that visuomotor prediction of the agent yields good feature representations for the underlying RL algorithm. To test our approach, we integrate Ego-Foresight with a model-free RL algorithm on simulated robotic manipulation tasks, showing that it improves efficiency and performance across tasks while making strides towards real-world RL applications by removing the need for costly supervisory signals.
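As a rough illustration of the general idea (not the paper's actual architecture), the sketch below shows a self-supervised visuomotor-prediction objective in PyTorch: a convolutional encoder embeds the current frame, a predictor maps those features plus a short sequence of motor commands to the features of a future frame, and the resulting prediction loss can be added as an auxiliary term to a model-free RL update. All module shapes and names (`VisuomotorPredictor`, `auxiliary_loss`), as well as the feature-matching target, are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VisuomotorPredictor(nn.Module):
    """Toy sketch: encode the current frame, then predict the features of a
    future frame from the current features plus the intervening actions.
    Architecture details are illustrative, not taken from the paper."""

    def __init__(self, action_dim=4, horizon=5, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(feat_dim),
        )
        self.predictor = nn.Sequential(
            nn.Linear(feat_dim + horizon * action_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, obs, actions):
        z = self.encoder(obs)              # features of the current frame
        a = actions.flatten(start_dim=1)   # planned motor commands, flattened
        return z, self.predictor(torch.cat([z, a], dim=-1))

def auxiliary_loss(model, obs, actions, future_obs):
    """Self-supervised visuomotor-prediction loss: the predicted future
    features should match the encoded future observation (target detached)."""
    _, z_pred = model(obs, actions)
    with torch.no_grad():
        z_target = model.encoder(future_obs)
    return nn.functional.mse_loss(z_pred, z_target)

# Usage sketch: this loss would be weighted and added to the RL objective.
model = VisuomotorPredictor()
obs = torch.randn(8, 3, 64, 64)
acts = torch.randn(8, 5, 4)
future_obs = torch.randn(8, 3, 64, 64)
loss = auxiliary_loss(model, obs, acts, future_obs)
```

In this sketch the prediction target only depends on the agent's own commands, which is what lets the representation focus on the agent's motion rather than on the full environment dynamics; how the prediction loss is weighted against the RL loss is left open here.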