Use of External Markers by Reactive Agents as an Easier Evolutionary Route Toward Memory

Published: 01 Jan 2024, Last Modified: 16 May 2025 · IJCNN 2024 · CC BY-SA 4.0
Abstract: Memory is a key functional requirement for cognitive agents. There are three basic ways to implement memory using neural networks: (1) RNN: recurrent neural networks, (2) TDNN: time-delayed neural networks (feed-forward), and (3) DROPPER: external marker dropper/detector (feed-forward). All three have been found to be effective in prior research. In this paper, we ask which of these mechanisms could have evolved earlier and more easily. To answer this question, we set up a simple ball-catching task in which two balls fall from above at different speeds and an agent at the bottom has to catch them using range sensors. Depending on the relative speed of the balls, the slow ball sometimes drifts out of sensor range, so the agent must catch the fast ball first and then remember to catch the second (slow) ball; this requires memory. We used the NeuroEvolution of Augmenting Topologies (NEAT) algorithm, which evolves both connection weights and network topologies, to evolve controllers with each of the three memory mechanisms. Our results show that the DROPPER mechanism is the fastest to evolve a successful controller, followed by TDNN and RNN. Among the feed-forward topologies, we also found that DROPPER is more robust than TDNN (less sensitive to the relative speed of the balls). These results show that a simple reactive agent could quickly evolve a rudimentary form of memory by depositing and detecting external markers, long before other internalized memory mechanisms evolve. These findings shed light on the evolutionary route toward memory in cognitive agents.
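
To make the DROPPER idea concrete, here is a minimal sketch, not the paper's implementation: the class `BallCatchWorld`, its 1-D layout, and all parameter names are illustrative assumptions. The point it demonstrates is that a stateless, feed-forward controller can off-load memory onto the environment by depositing a marker and reading it back through a sensor.

```python
# Minimal sketch of a DROPPER-style setup (illustrative assumptions throughout).
# The controller is purely feed-forward: the only "memory" is a marker the
# agent can deposit in the world and later detect via a dedicated sensor input.

class BallCatchWorld:
    """Toy 1-D stand-in for the two-ball catching arena with a droppable marker."""

    def __init__(self, width=20, start=10):
        self.width = width
        self.agent_x = start
        self.marker_x = None  # position of the external marker, if one was dropped

    def marker_sensor(self):
        # Reads 1.0 when the agent stands on a previously dropped marker, else 0.0.
        return 1.0 if self.marker_x == self.agent_x else 0.0

    def step(self, move, drop):
        # Apply the agent's action; the marker persists in the world, not in the agent.
        self.agent_x = max(0, min(self.width - 1, self.agent_x + move))
        if drop:
            self.marker_x = self.agent_x


# Tiny demo: drop a marker, wander off with no internal state, then find it again.
world = BallCatchWorld(start=5)
world.step(move=0, drop=True)     # deposit marker at x = 5
world.step(move=+1, drop=False)   # move away; nothing is stored inside the agent
world.step(move=-1, drop=False)   # move back
print(world.marker_sensor())      # -> 1.0, the environment "remembers" x = 5
```

In the paper's actual experiments, the drop decision and the marker-sensor input would be extra output and input nodes of the NEAT-evolved feed-forward network rather than hand-coded logic as above.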