Program Induction to Interpret Transition Systems
Svetlin Penkov, Subramanian Ramamoorthy
Jun 15, 2017 (modified: Jun 19, 2017) · ICML 2017 WHI Submission
Abstract: Explaining and reasoning about processes which underlie observed black-box phenomena enables the discovery of causal mechanisms, derivation of suitable abstract representations and the formulation of more robust predictions. We propose to learn high-level functional programs in order to represent abstract models which capture the invariant structure in the observed data. We introduce the π-machine (program-induction machine) -- an architecture able to induce interpretable LISP-like programs from observed data traces. We propose an optimisation procedure for program learning based on backpropagation, gradient descent and A* search. We apply the proposed method to two problems: system identification of dynamical systems and explaining the behaviour of a DQN agent. Our results show that the π-machine can efficiently induce interpretable programs from individual data traces.
TL;DR: Induce programs from single observation traces in order to explain the black-box process generating the data.
Keywords: program induction, interpretability
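The abstract describes combining A* search over program structures with gradient-based fitting of each candidate's continuous parameters. The toy sketch below illustrates that general idea only — it is not the authors' π-machine, and all names (the templates, `fit`, `induce`, the complexity penalty) are illustrative assumptions. It searches a tiny space of LISP-like expression templates, fits each template's free parameters to an observed data trace by numeric gradient descent, and picks the candidate minimising loss plus a structural cost, A*-style, via a priority queue.

```python
import heapq

# Illustrative sketch only (not the authors' pi-machine): observed trace
# generated by the black-box process y = 2*t + 1.
trace = [(t, 2.0 * t + 1.0) for t in range(5)]

# Candidate LISP-like program templates: (printed form, #parameters, evaluator).
TEMPLATES = [
    ("(const a)", 1, lambda p, t: p[0]),
    ("(linear a b)", 2, lambda p, t: p[0] * t + p[1]),
    ("(quadratic a b c)", 3, lambda p, t: p[0] * t * t + p[1] * t + p[2]),
]

def loss(params, f):
    # Mean squared error of the candidate program against the trace.
    return sum((f(params, t) - y) ** 2 for t, y in trace) / len(trace)

def fit(f, n_params, steps=3000, lr=0.01, eps=1e-5):
    # Fit the template's free parameters by numeric gradient descent.
    p = [0.0] * n_params
    for _ in range(steps):
        grad = []
        for i in range(n_params):
            q = list(p)
            q[i] += eps
            grad.append((loss(q, f) - loss(p, f)) / eps)
        p = [pi - lr * gi for pi, gi in zip(p, grad)]
    return p

def induce():
    # A*-style priority: fitted loss plus a small per-parameter complexity
    # penalty, so simpler programs win ties with more expressive ones.
    heap = []
    for name, k, f in TEMPLATES:
        params = fit(f, k)
        heapq.heappush(heap, (loss(params, f) + 0.01 * k, name, params))
    return heapq.heappop(heap)

score, program, params = induce()
print(program, [round(x, 2) for x in params])
```

Here the linear template recovers parameters close to the true generator (a ≈ 2, b ≈ 1) and beats the quadratic template on the complexity-penalised score, mirroring the abstract's point that search plus gradient fitting can surface a compact, interpretable explanation of a single data trace.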