Learning Finite State Representations of Recurrent Policy Networks

Published: 21 Dec 2018, Last Modified: 05 May 2023
ICLR 2019 Conference Blind Submission
Abstract: Recurrent neural networks (RNNs) are an effective representation of control policies for a wide range of reinforcement and imitation learning problems. RNN policies, however, are particularly difficult to explain, understand, and analyze due to their use of continuous-valued memory vectors and observation features. In this paper, we introduce a new technique, Quantized Bottleneck Insertion, to learn finite representations of these vectors and features. The result is a quantized representation of the RNN that can be analyzed to improve our understanding of memory use and general behavior. We present results of this approach on synthetic environments and six Atari games. The resulting finite representations are surprisingly small in some cases, using as few as 3 discrete memory states and 10 observations for a perfect Pong policy. We also show that these finite policy representations lead to improved interpretability.
Keywords: recurrent neural networks, finite state machine, quantization, interpretability, autoencoder, Moore machine, reinforcement learning, imitation learning, representation, Atari, Tomita
TL;DR: Extracting a finite state machine from a recurrent neural network via quantization for the purpose of interpretability with experiments on Atari.
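The abstract names the technique, Quantized Bottleneck Insertion, but does not spell out its mechanics. Below is a minimal, hypothetical PyTorch sketch of the general idea: an autoencoder with a discretized latent code is trained to reconstruct an RNN's continuous hidden states, so that routing each hidden state through the bottleneck restricts memory to a finite set of discrete codes. The 3-level quantizer, the straight-through gradient, and all names (`TernaryQuantize`, `QuantizedBottleneck`) are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a quantized bottleneck (not the paper's code).
import torch
import torch.nn as nn

class TernaryQuantize(torch.autograd.Function):
    """3-level quantization (-1, 0, +1) with a straight-through gradient.
    The quantization levels here are an assumption for illustration."""
    @staticmethod
    def forward(ctx, x):
        return torch.round(torch.clamp(x, -1.0, 1.0))

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass gradients through unchanged,
        # since round() has zero gradient almost everywhere.
        return grad_output

class QuantizedBottleneck(nn.Module):
    """Autoencoder whose latent code is quantized to a finite value set.

    Trained to reconstruct a continuous vector (e.g. an RNN hidden state);
    once inserted into the network, the discrete codes it emits can be
    enumerated as the states of a finite state machine.
    """
    def __init__(self, input_dim: int, code_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, code_dim), nn.Tanh())
        self.decoder = nn.Linear(code_dim, input_dim)

    def forward(self, x):
        code = TernaryQuantize.apply(self.encoder(x))  # values in {-1, 0, 1}
        return self.decoder(code), code

# Usage sketch: train the bottleneck to reconstruct hidden states collected
# from a trained recurrent policy, then route every hidden state through it.
qbn = QuantizedBottleneck(input_dim=64, code_dim=8)
h = torch.randn(32, 64)                      # batch of RNN hidden states
recon, code = qbn(h)
loss = nn.functional.mse_loss(recon, h)      # reconstruction objective
loss.backward()                              # gradients flow via straight-through
```

Under these assumptions, `code_dim` latent units with 3 levels each bound the number of distinct memory states at 3^code_dim; the codes actually visited by the policy can then be enumerated as the states of a Moore machine, consistent with the small state counts reported in the abstract.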