Testing causal hypotheses through Hierarchical Reinforcement Learning

Published: 09 Oct 2024, Last Modified: 02 Dec 2024, NeurIPS 2024 Workshop IMOL, Tiny Paper Poster, CC BY 4.0
Track: Tiny paper track
Keywords: hierarchical RL, agent as scientist, exploration, causality, children
TL;DR: Proposes using Hierarchical Reinforcement Learning to develop AI agents that generate and test hypotheses, representing causal relationships with Structural Causal Models. Combines MDPs with SCMs for adaptable learning in open-ended environments.
Abstract: A goal of AI research is to develop agentic systems capable of operating in open-ended environments with the autonomy and adaptability of a scientist---generating hypotheses, empirically testing them, and drawing conclusions about how the world works. We propose Structural Causal Models (SCMs) as a formalization of the space of hypotheses, and hierarchical reinforcement learning (HRL) as a key ingredient for building agents that can systematically discover the correct SCM. This provides a framework for constructing agent behavior that generates and tests hypotheses, enabling transferable learning about the world. Finally, we discuss practical implementation strategies.
Submission Number: 53
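The abstract's idea of SCMs as a hypothesis space that an agent discriminates through interventions can be illustrated with a minimal sketch. The class and variable names below (`StructuralCausalModel`, `hyp_a`, `hyp_b`) are illustrative assumptions, not the paper's implementation:

```python
import random

class StructuralCausalModel:
    """A tiny SCM: each variable is a function of its parents plus exogenous noise."""

    def __init__(self, mechanisms):
        # mechanisms: dict mapping variable -> (parents, function(*parent_values) -> value).
        # Insertion order is assumed to be a valid topological order of the causal graph.
        self.mechanisms = mechanisms

    def sample(self, interventions=None):
        """Sample all variables in topological order, honoring do-interventions."""
        interventions = interventions or {}
        values = {}
        for var, (parents, fn) in self.mechanisms.items():
            if var in interventions:
                values[var] = interventions[var]  # do(var = x) severs incoming edges
            else:
                values[var] = fn(*(values[p] for p in parents))
        return values

# Two competing causal hypotheses about the same two-variable world.
# Hypothesis A: X causes Y.  Hypothesis B: Y is independent of X.
hyp_a = StructuralCausalModel({
    "X": ([], lambda: random.random()),
    "Y": (["X"], lambda x: 2 * x),
})
hyp_b = StructuralCausalModel({
    "X": ([], lambda: random.random()),
    "Y": ([], lambda: random.random()),
})

# An agent can discriminate the hypotheses by intervening on X and checking
# whether Y responds -- the kind of experiment an HRL agent could select as a subgoal.
sample = hyp_a.sample(interventions={"X": 0.5})
print(sample["Y"])  # under hypothesis A, Y = 2 * 0.5 = 1.0
```

In the paper's framing, the high level of the hierarchy would pick which intervention (experiment) to run, while the low level executes it in the environment; comparing observed outcomes against each candidate SCM's predictions then prunes the hypothesis space.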