Keywords: Homeostasis, Deep Reinforcement Learning, Reward, Homeostatic Reinforcement Learning, Computational Neuroscience
TL;DR: By applying a reward definition inspired by computational neuroscience to the internal state of the agent's body and training the agent with deep RL, we realise a quadruped agent that performs homeostatic behaviour with continuous control.
Abstract: In this work, we propose a neural homeostat, a neural machine that stabilises the internal physiological state through interactions with the environment. Based on this framework, we demonstrate that behavioural homeostasis with low-level continuous motor control emerges in an embodied agent using only rewards computed from the agent's local information. By using the bodily state of the embodied agent as the reward source, the complexity of the reward definition is 'outsourced' to the coupled dynamics of the bodily state and the environment. As a result, our reward definition is simple, yet the optimised behaviour of the agent can be surprisingly complex. Our contributions are 1) an extension of homeostatic reinforcement learning to continuous motor control using deep reinforcement learning; 2) a comparison of homeostatic reward definitions from previous studies, in which rewards based on the difference of the drive function performed best; and 3) a demonstration of the emergence of adaptive behaviour from low-level motor control through direct optimisation of the homeostatic objective.
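The drive-difference reward mentioned in contribution 2) can be illustrated with a minimal sketch. Assumptions not in the abstract: the drive is taken as a distance of the internal state from a setpoint (a common choice in the homeostatic RL literature), and the names `drive`, `homeostatic_reward`, `h_star`, and the exponent `m` are illustrative, not the paper's actual implementation.

```python
import numpy as np

def drive(h, h_star, m=2):
    # Illustrative drive function: deviation of the internal
    # state h from the setpoint h_star (here, a power-m distance).
    return float(np.sum(np.abs(h_star - h) ** m))

def homeostatic_reward(h_t, h_next, h_star):
    # Reward as the difference of the drive before and after a
    # transition: positive when the transition reduces the drive,
    # i.e. moves the internal state toward the setpoint.
    return drive(h_t, h_star) - drive(h_next, h_star)

# Example: internal state moves from 2.0 toward the setpoint 0.0.
r = homeostatic_reward(np.array([2.0]), np.array([1.0]), np.array([0.0]))
```

Because the reward depends only on the agent's own internal state, no task-specific reward shaping is needed; the environment's dynamics determine which behaviours reduce the drive.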
Supplementary Material: zip