Path Channels and Plan Extension Kernels: a Mechanistic Description of Planning in a Sokoban RNN

ICLR 2026 Conference Submission 21826 Authors

19 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: mechanistic interpretability, reinforcement learning, Sokoban
TL;DR: A more precise and causal representation of the plan lets us understand how the plan is constructed in an RNN that plays Sokoban
Abstract: We partially reverse-engineer a convolutional recurrent neural network (RNN) trained with model-free reinforcement learning to play the box-pushing game Sokoban. We find that the RNN stores future moves (plans) as activations in particular channels of the hidden state, which we call *path channels*. A high activation at a particular location means that, when a box reaches that location, it will be pushed in the channel's assigned direction. We examine the convolutional kernels between path channels and find that they encode the change in position resulting from each possible action, thus representing part of a learned *transition model*. The RNN constructs plans starting from the boxes and goals: these kernels *extend* activations in path channels forward from boxes and backward from goals. Negative values are placed in path channels at obstacles, and the extension kernels propagate these negative values in reverse, pruning the last few steps and letting an alternative plan emerge: a form of backtracking. Our work shows that a precise understanding of the plan representation allows us to directly understand, in more familiar terms, the bidirectional planning-like algorithm learned by model-free training.
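To make the extension mechanism concrete, here is a minimal toy sketch (not the authors' code) of how a path channel could be filled in by repeated local extension on a 1-D track with a single "push right" channel. All names, constants, and the update rule are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of the "plan extension" idea: activation spreads out
# from seeds at the box and the goal, one square per recurrent tick, while
# negative values at obstacles suppress (prune) the squares around them.
import numpy as np

N = 8                       # track length
box, goal = 1, 6            # box start square and goal square

path = np.zeros(N)          # the "push right" path channel of the hidden state
path[box] = 1.0             # seed at the box (forward extension starts here)
path[goal] = 1.0            # seed at the goal (backward extension starts here)

obstacle = np.zeros(N)      # negative values written where movement is blocked
obstacle[0] = obstacle[-1] = -1.0   # walls at both ends of the track
# obstacle[4] = -2.0        # uncomment: blocks the route; the negative value
#                           # propagates in reverse and prunes steps behind it

for _ in range(N):          # each recurrent tick extends the plan one square
    fwd = np.roll(path, 1);  fwd[0] = 0.0   # copy activation one square right
    bwd = np.roll(path, -1); bwd[-1] = 0.0  # copy activation one square left
    path = np.clip(path + 0.5 * (fwd + bwd) + obstacle, -1.0, 1.0)

# Squares where a box, once there, would be pushed right:
print((path > 0.5).astype(int))   # -> [0 1 1 1 1 1 1 0]
```

The update rule here is symmetric for brevity, so both seeds spread both ways; on a 1-D track the outcome is the same as directional forward and backward extension, with every square between box and goal joining the plan.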
Primary Area: interpretability and explainable AI
Submission Number: 21826