Generative Intrinsic Optimization: Intrinsic Control with Model Learning

Published: 20 Oct 2023, Last Modified: 30 Nov 2023
Venue: IMOL@NeurIPS 2023
Keywords: Intrinsic Motivation, Mutual Information, Model Learning
TL;DR: A unified learning framework that integrates intrinsic control with model learning.
Abstract: A future sequence represents the outcome of executing an action in the environment (i.e., the trajectory from that point onward). An agent driven by the information-theoretic concept of mutual information seeks maximally informative consequences. The explicit outcome may vary across state, return, or trajectory, serving different purposes such as credit assignment or imitation learning. However, the problem of incorporating intrinsic motivation into reward maximization in a principled manner is often neglected. In this work, we propose a policy iteration scheme that seamlessly incorporates the mutual information and ensures convergence to the optimal policy. Concurrently, we introduce a variational approach that jointly learns the quantities required for estimating the mutual information and the dynamics model, providing a general framework for incorporating different forms of outcomes of interest. While we focus mainly on theoretical analysis, our approach opens the possibility of leveraging intrinsic control with model learning to enhance sample efficiency and to incorporate the uncertainty of the environment into decision-making.
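To give the abstract's objective a concrete shape, here is a minimal sketch of an MI-regularized control objective; the notation (outcome variable O, temperature \alpha, variational decoder q_\phi) is an assumption for illustration, not the paper's, though the bound itself is the standard Barber–Agakov variational lower bound on mutual information:

J(\pi) \;=\; \mathbb{E}_{\pi}\!\Big[\textstyle\sum_{t} \gamma^{t}\,\big(r(s_t, a_t) + \alpha\, I(O;\, a_t \mid s_t)\big)\Big]

I(O;\, a \mid s) \;\ge\; \mathcal{H}(O \mid s) \;+\; \mathbb{E}_{p(O,\, a \mid s)}\big[\log q_{\phi}(O \mid s, a)\big]

The bound follows from the nonnegativity of the KL divergence between p(O \mid s, a) and q_\phi(O \mid s, a), so maximizing the right-hand side over \phi tightens the mutual-information estimate; under the joint-learning view described above, the same learned model can serve both as the variational quantity and as the dynamics model.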
Submission Number: 17