Neurobehavior of exploring AI agents

Published: 20 Oct 2023, Last Modified: 30 Nov 2023, IMOL@NeurIPS2023
Keywords: exploration, intrinsic motivation, neuroAI, animal-inspiration, interpretability
TL;DR: Studying AI exploration in animal-inspired settings reveals opportunities for improving agent performance and provides insight into learned neural dynamics.
Abstract: We study intrinsically motivated exploration by artificially intelligent (AI) agents in animal-inspired settings. We construct virtual environments that are 3D, vision-based, physics-simulated, and based on two established animal assays: labyrinth exploration and novel object interaction. We assess Plan2Explore (P2E), a leading model-based, intrinsically motivated deep reinforcement learning agent, in these environments. We characterize the behavior of the AI agents and compare it to animal behavior, using measures devised for animal neuroethology. P2E exhibits some similarities to animal behavior, but is dramatically less efficient than mice at labyrinth exploration. We further characterize the neural dynamics associated with world modeling in the novel-object assay. We identify latent neural population activity axes linearly associated with representing object proximity. These results identify areas for improvement in existing AI agents, and make strides toward understanding the learned neural dynamics that guide their behavior.
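The abstract does not specify how the proximity-related latent axes are identified; a common approach is to regress recorded world-model latent states onto object proximity and treat the unit-norm regression weights as the candidate axis. The sketch below illustrates this idea; the names `latents` and `proximity`, the ridge penalty, and the synthetic data are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: finding a latent axis linearly associated with object proximity.
# Assumes `latents` is a (T, D) array of world-model latent states recorded during
# novel-object episodes and `proximity` is a (T,) array of agent-to-object distances.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def fit_proximity_axis(latents: np.ndarray, proximity: np.ndarray, alpha: float = 1.0):
    """Fit a linear readout of object proximity from latent population activity.

    Returns the unit-norm regression weights (a candidate 'proximity axis')
    and the held-out R^2 of the linear fit.
    """
    z_train, z_test, p_train, p_test = train_test_split(
        latents, proximity, test_size=0.25, random_state=0
    )
    model = Ridge(alpha=alpha).fit(z_train, p_train)
    axis = model.coef_ / np.linalg.norm(model.coef_)  # direction in latent space
    return axis, model.score(z_test, p_test)

# Synthetic data standing in for recorded agent latents, for illustration only.
rng = np.random.default_rng(0)
Z = rng.normal(size=(5000, 32))                  # latent states over time
true_axis = rng.normal(size=32)
p = Z @ true_axis + 0.1 * rng.normal(size=5000)  # proximity correlated with one axis
axis, r2 = fit_proximity_axis(Z, p)
print(f"held-out R^2 of linear proximity readout: {r2:.3f}")
```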
Submission Number: 14