Emulating Perceptual Development in Deep Reinforcement Learning

Emir Arditi, Yukie Nagai, Emre Ugur, Minoru Asada, Erhan Öztop

Published: 2025, Last Modified: 04 Apr 2026 · ICDL 2025 · CC BY-SA 4.0
Abstract: The process of learning in infants differs from traditional reinforcement learning (RL) in several respects. The biggest difference is that RL assumes a stationary world and an agent with fixed sensory and motor abilities. In contrast, infant development proceeds by unfolding new perceptual and motor abilities in parallel with learning. Despite the general notion that this staged learning leads to faster and better learning in biological systems, it is not clear how such a learning mechanism can be embedded in a reinforcement learning scenario. In this study, as a step in that direction, we explored how emulated perceptual development (EPD) in an RL setting can benefit learning. As a test bed, we took the Pong game and required the RL agent to learn to play against a pre-programmed opponent using a policy-gradient-based deep RL method. During learning, inspired by the progressive perceptual development of infants, the state-space representation of the RL agent was changed in stages by incorporating additional information about the environment, which largely invalidated the classical RL stationarity assumption. By comparing the proposed perceptual-development-based learning with the performance of baseline learners, we assessed whether the benefits of developmental learning could be transferred to deep reinforcement learning systems. The results obtained suggest that a suitable perceptual development regime may improve learning progress and yield better-performing agents.
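The abstract describes changing the agent's state representation in stages during training. A minimal sketch of one way such staged observation unmasking could be implemented is below; the Pong state layout, the stage boundaries, and the feature counts per stage are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical full Pong-like state vector (layout assumed for illustration):
# [ball_x, ball_y, ball_vx, ball_vy, own_paddle_y, opponent_paddle_y]
FULL_DIM = 6

# Number of state entries visible at each developmental stage (assumed values).
STAGE_VISIBLE = {0: 3, 1: 5, 2: 6}


def stage_for_episode(episode, boundaries=(1000, 5000)):
    """Map a training-episode count to a developmental stage index.

    `boundaries` are illustrative episode thresholds at which a new
    perceptual stage unfolds.
    """
    stage = 0
    for b in boundaries:
        if episode >= b:
            stage += 1
    return stage


def perceive(full_state, stage):
    """Return the stage-limited observation: features beyond the stage's
    visible prefix are zeroed out, emulating not-yet-developed perception."""
    visible = STAGE_VISIBLE[stage]
    obs = np.zeros_like(full_state)
    obs[:visible] = full_state[:visible]
    return obs
```

In this sketch the policy network always receives a fixed-size input, so revealing a new feature group changes the observation distribution mid-training, which is exactly the non-stationarity the abstract says invalidates the classical RL assumption.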