Toward Universal and Interpretable World Models for Open-ended Learning Agents

Published: 09 Oct 2024 · Last Modified: 02 Dec 2024 · NeurIPS 2024 Workshop IMOL · Tiny Paper (Poster) · CC BY 4.0
Track: Tiny paper track
Keywords: Bayesian, world model, representation learning, agent, biomimetic
TL;DR: We introduce a generic, compositional and interpretable class of generative world models that supports open-ended learning agents.
Abstract: We introduce a generic, compositional and interpretable class of generative world models that supports open-ended learning agents. This is a sparse class of Bayesian networks capable of approximating a broad range of stochastic processes, providing agents with the ability to learn world models in a manner that may be both interpretable and computationally scalable. By integrating Bayesian structure learning with intrinsically motivated (model-based) planning, this approach enables agents to actively develop and refine their world models, which may lead to developmental learning and more robust, adaptive behavior.
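To make the abstract's core object concrete: a generative world model expressed as a (here, trivially small) discrete Bayesian network, with exact Bayesian filtering over a hidden state. This is a minimal illustrative sketch, not the paper's implementation; the state space, transition and likelihood tables, and all function names are assumptions chosen for clarity. The interpretability claim corresponds to the fact that every parameter below is a named conditional probability one can read off directly.

```python
# Minimal sketch (illustrative, not the authors' model): a two-state
# generative world model as a tiny Bayesian network, with exact
# Bayesian belief updating (filtering). All numbers are assumed.

def normalize(p):
    """Rescale a nonnegative vector to sum to 1."""
    s = sum(p)
    return [x / s for x in p]

# Hidden state s in {0, 1}; transition model P(s' | s).
TRANS = [[0.9, 0.1],
         [0.2, 0.8]]

# Observation o in {0, 1}; likelihood P(o | s).
LIK = [[0.8, 0.2],
       [0.3, 0.7]]

def predict(belief):
    """Push the belief through the dynamics: P(s') = sum_s P(s'|s) P(s)."""
    return [sum(TRANS[s][s2] * belief[s] for s in range(2))
            for s2 in range(2)]

def update(belief, obs):
    """Condition on an observation: P(s | o) proportional to P(o|s) P(s)."""
    return normalize([LIK[s][obs] * belief[s] for s in range(2)])

# Start from a uniform prior and filter a short observation sequence.
belief = [0.5, 0.5]
for obs in [0, 0, 1]:
    belief = update(predict(belief), obs)

print(belief)  # posterior over the hidden state after three observations
```

In the paper's setting the network would be sparse and compositional over many such factors, and structure learning would adapt the graph itself; this sketch only shows the inference step that any such model supports.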
Submission Number: 8