Goal Discovery with Causal Capacity for Efficient Reinforcement Learning

RSS 2025 Workshop EgoAct Submission 1 Authors

25 Apr 2025 (modified: 10 Jun 2025) · RSS 2025 Workshop EgoAct Submission · CC BY 4.0
Keywords: Reinforcement Learning, Causal Inference
Abstract: Causal inference is central to how humans explore the world, and modeling it can enable an agent to explore its environment efficiently in reinforcement learning. Existing research indicates that establishing the causal relationship between actions and state transitions helps an agent reason about how its policy affects future trajectories, thereby promoting directed exploration. However, measuring this causality is challenging because it is intractable in the vast state-action space of complex scenarios. In this paper, we propose a novel $\textbf{G}$oal $\textbf{D}$iscovery with $\textbf{C}$ausal $\textbf{C}$apacity (GDCC) framework for efficient environment exploration. Specifically, we first derive a measure of causality in state space, i.e., causal capacity, which represents the greatest influence an agent's behavior can exert on its future trajectories. We then present a Monte Carlo based method to identify critical points in discrete state spaces and further optimize it for continuous, high-dimensional environments. These critical points reveal where the agent makes important decisions in the environment and are regarded as subgoals that guide the agent to explore more purposefully and efficiently. Empirical results on multi-objective tasks demonstrate that states with high causal capacity align with the expected subgoals, and that GDCC achieves significant improvements in success rate over baseline methods.
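To make the idea of a Monte Carlo estimate of per-state causal influence concrete, the sketch below scores each state of a toy tabular environment by how strongly the choice of action changes the empirical next-state distribution, and flags high-scoring states as candidate subgoals. This is not the paper's definition of causal capacity or its algorithm; the total-variation score, the toy corridor environment, the threshold, and all names (`sample_transitions`, `causal_capacity`, `step_fn`) are illustrative assumptions.

```python
# Hypothetical sketch: Monte Carlo scoring of per-state action influence in a
# tabular environment, used to flag candidate subgoal ("critical") states.
from collections import Counter
import random


def sample_transitions(step_fn, state, actions, n_samples=200, rng=None):
    """Empirical next-state distribution for each action, via Monte Carlo sampling."""
    rng = rng or random.Random(0)
    dists = {}
    for a in actions:
        counts = Counter(step_fn(state, a, rng) for _ in range(n_samples))
        dists[a] = {s: c / n_samples for s, c in counts.items()}
    return dists


def causal_capacity(dists):
    """Max over actions of the total-variation distance to the action-averaged marginal."""
    support = set().union(*(d.keys() for d in dists.values()))
    marginal = {s: sum(d.get(s, 0.0) for d in dists.values()) / len(dists) for s in support}
    return max(
        0.5 * sum(abs(d.get(s, 0.0) - marginal[s]) for s in support)
        for d in dists.values()
    )


def step_fn(state, action, rng):
    """Toy 1-D corridor: the action only matters at the junction state 2."""
    if state == 2:                       # junction: the action decides the branch
        return 3 if action == "right" else 1
    drift = rng.choice([-1, 0, 1])       # elsewhere transitions ignore the action
    return max(0, min(4, state + drift))


if __name__ == "__main__":
    actions = ["left", "right"]
    scores = {s: causal_capacity(sample_transitions(step_fn, s, actions)) for s in range(5)}
    subgoals = [s for s, v in scores.items() if v > 0.4]  # threshold is illustrative
    print("scores:", scores)
    print("candidate subgoals:", subgoals)
```

Under these assumptions, only the junction state receives a nonzero score, so it is the one proposed as a subgoal; a continuous or high-dimensional setting would require a learned transition model or density estimate in place of the empirical counts.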
Submission Number: 1