Novelty Detection in Reinforcement Learning with World Models

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 spotlight poster · CC BY 4.0
TL;DR: Designing straightforward techniques to improve deployment safeguards for RL systems
Abstract: Reinforcement learning (RL) using world models has seen significant recent success. However, when a sudden change to world mechanics or properties occurs, agent performance and reliability can decline dramatically. We refer to such sudden changes in visual properties or state transitions as novelties. Implementing novelty detection within world model frameworks is a crucial task for protecting deployed agents. In this paper, we propose straightforward bounding approaches that incorporate novelty detection into world model RL agents by using the misalignment between the world model's hallucinated states and the true observed states as a novelty score. We provide effective approaches to detecting novelties in the distribution of transitions learned by an agent's world model. Finally, we show the advantage of our work in novel environments compared to traditional machine learning novelty detection methods as well as currently accepted RL-focused novelty detection algorithms.
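To make the scoring idea concrete, here is a minimal sketch of one way such a bound could work; it is not the authors' implementation, and the function names, latent dimensions, and quantile-based threshold are all illustrative assumptions:

```python
import numpy as np

def novelty_scores(predicted, observed):
    """Per-transition novelty score: mean squared misalignment between the
    world model's hallucinated (predicted) latent states and the latents
    encoded from the actually observed states."""
    diff = np.asarray(predicted) - np.asarray(observed)
    return np.mean(diff ** 2, axis=-1)

def calibrate_bound(nominal_scores, quantile=0.99):
    """Upper bound on the nominal score distribution; transitions whose
    score exceeds it are flagged as novel. The quantile trades off
    detection sensitivity against false positives."""
    return float(np.quantile(nominal_scores, quantile))

rng = np.random.default_rng(0)

# Synthetic stand-ins for world-model rollouts: nominal predictions track
# the observed latents closely; after a novelty, the transition dynamics
# shift while the model keeps predicting the old dynamics.
observed_nominal = rng.normal(size=(1000, 32))
predicted_nominal = observed_nominal + rng.normal(scale=0.1, size=(1000, 32))
observed_novel = rng.normal(loc=1.5, size=(100, 32))
predicted_novel = rng.normal(size=(100, 32))

# Calibrate the bound on pre-deployment (assumed nominal) transitions,
# then flag deployment transitions whose misalignment exceeds it.
bound = calibrate_bound(novelty_scores(predicted_nominal, observed_nominal))
flags = novelty_scores(predicted_novel, observed_novel) > bound
print(f"bound={bound:.3f}, flagged {flags.mean():.0%} of novel transitions")
```

In this sketch the "bounding approach" is simply a quantile of the nominal score distribution; the paper's actual bounds and state representations may differ.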
Lay Summary: Reinforcement learning (RL) agents are becoming more capable by using internal models—called world models—to predict and understand their environments. However, when something in the environment suddenly changes, like its appearance or how it behaves, these agents can perform poorly. Such unexpected changes are known as novelties. To help RL agents handle these surprises, we propose a simple and effective method for novelty detection. Our approach compares what the agent expects to happen (based on its internal world model) with what actually happens in the real world. The greater the mismatch, the more likely it is that something novel has occurred. We test our method in a variety of virtual environments, including MiniGrid, Atari games, and the DeepMind Control Suite. To simulate real-world surprises, we introduce changes using modified versions of these environments, such as NovGrid, HackAtari, and the RealWorldRL Suite. These allow us to evaluate how well our method detects unexpected changes. Our results show that our approach outperforms both traditional machine learning novelty detection methods and those specifically designed for RL, making it a promising tool for building more reliable and adaptable agents.
Primary Area: Reinforcement Learning->Online
Keywords: Anomaly Detection, Safety Mechanisms
Submission Number: 7420