States as goal-directed concepts: an epistemic approach to state-representation learning

Published: 27 Oct 2023, Last Modified: 27 Nov 2023 · InfoCog@NeurIPS 2023 (Oral)
Keywords: state representation, goal-directed learning, Sanov's theorem
TL;DR: States only make sense in light of goals
Abstract: Our goals fundamentally shape how we experience the world. For example, when we are hungry, we tend to view objects in our environment according to whether or not they are edible (or tasty). When we are cold, by contrast, we may view the very same objects according to their ability to produce heat. Computational theories of learning in cognitive systems, such as reinforcement learning, use the notion of "state representation" to describe how agents decide which features of their environment are behaviorally relevant and which can be ignored. However, these approaches typically assume "ground-truth" state representations that are known to the agent, and reward functions that need to be learned. Here we suggest an alternative approach in which state representations are not assumed to be veridical, or even pre-defined, but rather emerge from the agent's goals through interaction with its environment. We illustrate this novel perspective by inferring the goals driving rat behavior in an odor-guided choice task, and we discuss its implications for developing, from first principles, an information-theoretic account of goal-directed state-representation learning and behavior.
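The keywords name Sanov's theorem as the large-deviation tool behind the information-theoretic account. As a minimal illustration (not the paper's method), Sanov's theorem says that the probability that the empirical distribution of n i.i.d. samples from p lands in a set A decays as exp(-n · min_{q∈A} KL(q‖p)). The sketch below checks this for a Bernoulli source; the parameter values (n=50, p=0.5, threshold 0.7) are arbitrary choices for the demonstration:

```python
import math
import random

def kl_bernoulli(q, p):
    """KL divergence KL(Bern(q) || Bern(p)) in nats."""
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def empirical_tail_prob(n, p, thresh, trials=20000, seed=0):
    """Monte Carlo estimate of P(empirical mean of n Bern(p) samples >= thresh)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        mean = sum(rng.random() < p for _ in range(n)) / n
        if mean >= thresh:
            hits += 1
    return hits / trials

n, p, thresh = 50, 0.5, 0.7
rate = kl_bernoulli(thresh, p)        # Sanov rate: KL of the closest "atypical" distribution
mc = empirical_tail_prob(n, p, thresh)
bound = math.exp(-n * rate)           # exponential decay predicted by Sanov's theorem
print(f"Monte Carlo: {mc:.4f}  exp(-n*KL): {bound:.4f}")
```

The Monte Carlo frequency falls below exp(-n·KL), consistent with Sanov's bound (which is tight only up to a polynomial prefactor). In the goal-directed reading suggested by the abstract, such KL rates quantify how surprising an observation stream is under a candidate goal, which is the kind of quantity one can use to infer goals from behavior.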
Submission Number: 29