Keywords: world models, reinforcement learning, imitation learning, similarity search
TL;DR: We compare the capabilities of modern neural-networks-based world models with a simple, search-based alternative for dynamics prediction.
Abstract: World models have become pervasive in the field of reinforcement learning. Their ability to model the transition dynamics of an environment has led to tremendous improvements in sample efficiency for online RL. The most notable example is Dreamer, a model that learns to act in a diverse set of image-based environments. In this paper, we leverage similarity search and stochastic representations to approximate a world model without any training procedure. We compare against PlaNet, a well-established world model of the Dreamer family. We evaluate both models on the quality of latent reconstruction and on the perceived similarity of the reconstructed images, for both next-step and long-horizon dynamics prediction. Our results demonstrate that a search-based world model is comparable to a training-based one in both settings. Notably, our model shows stronger long-horizon prediction performance than the baseline across a range of visually distinct environments.
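The core idea of a search-based world model, as described in the abstract, can be sketched as a nearest-neighbor lookup over stored transitions: instead of training a dynamics network, the model retrieves the most similar observed (latent, action) pair and returns its recorded successor. This is a minimal illustrative sketch under assumed interfaces (class and method names are hypothetical, not the paper's actual implementation):

```python
import numpy as np

class SearchWorldModel:
    """Training-free dynamics model: predicts the next latent state by
    nearest-neighbor search over a buffer of observed transitions.
    Illustrative sketch only; details are assumptions, not the paper's method."""

    def __init__(self):
        self.keys = []    # concatenated (latent, action) query vectors
        self.values = []  # corresponding next-step latents

    def add(self, latent, action, next_latent):
        # Store one observed transition in the search buffer.
        self.keys.append(np.concatenate([latent, action]))
        self.values.append(next_latent)

    def predict(self, latent, action):
        # Return the successor of the closest stored (latent, action) pair.
        query = np.concatenate([latent, action])
        dists = np.linalg.norm(np.stack(self.keys) - query, axis=1)
        return self.values[int(np.argmin(dists))]

    def rollout(self, latent, actions):
        # Long-horizon prediction: feed each prediction back as the next query.
        trajectory = []
        for action in actions:
            latent = self.predict(latent, action)
            trajectory.append(latent)
        return trajectory
```

The same `predict` call serves both evaluation settings in the abstract: a single step gives the next-step prediction, and `rollout` chains predictions autoregressively for the long-horizon case.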
Supplementary Material:  zip
Primary Area: Reinforcement learning (e.g., decision and control, planning, hierarchical RL, robotics)
Submission Number: 24550