AI in a vat: Fundamental limits of efficient world modelling for agent sandboxing and interpretability
Keywords: World models, agent sandboxing, POMDPs, AI interpretability, AI safety
TL;DR: This paper maps fundamental trade-offs that arise when building the world model in which an AI agent is tested before deployment.
Abstract: Recent work proposes to use world models as controlled virtual environments in which AI agents can be tested before deployment to ensure their reliability and safety. However, accurate world models often have high computational demands that can severely restrict the scope and depth of such assessments. Inspired by the classic ‘brain in a vat’ thought experiment, here we investigate ways of simplifying world models while remaining agnostic to the AI agent under evaluation. By following principles from computational mechanics, our approach reveals a fundamental trade-off in world model construction between efficiency and interpretability, demonstrating that no single world model can optimise all desirable characteristics. Building on this trade-off, we identify procedures to build world models that either minimise memory requirements, delineate the boundaries of what is learnable, or allow tracking causes of undesirable outcomes. In doing so, this work establishes fundamental limits in world modelling, leading to actionable guidelines that inform core design choices for effective agent evaluation.
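The abstract appeals to computational mechanics, where a memory-minimal world model keeps one state per class of predictively equivalent histories (a causal state). As a purely illustrative sketch, not taken from the paper, the following Python snippet groups histories of a toy stochastic process (the Golden Mean Process, an assumption made here for concreteness) by their next-symbol distribution; all function names and tolerances are likewise illustrative assumptions.

```python
from itertools import product

# Toy process: the Golden Mean Process (binary strings with no "11" substring).
# Its minimal predictive model has only two causal states, far fewer than the
# number of distinct histories -- illustrating memory-minimal world modelling
# in the computational-mechanics sense. This is an illustrative sketch, not
# code from the paper.

def next_symbol_dist(history: str) -> tuple[float, float]:
    """P(next symbol = 0), P(next symbol = 1) given a history."""
    if history.endswith("1"):
        return (1.0, 0.0)   # after a 1, a 0 must follow (no "11" allowed)
    return (0.5, 0.5)       # after a 0, both symbols are equally likely

def allowed(history: str) -> bool:
    """A history is valid iff it never contains two consecutive 1s."""
    return "11" not in history

def causal_states(L: int = 4, tol: float = 1e-9) -> list[list[str]]:
    """Group length-L histories by their predictive distribution.
    Histories in the same group are predictively equivalent, so a
    world model needs only one memory state per group."""
    groups: dict[tuple[int, ...], list[str]] = {}
    for bits in product("01", repeat=L):
        h = "".join(bits)
        if not allowed(h):
            continue
        # Quantise the distribution so near-equal distributions share a key.
        key = tuple(round(p / tol) for p in next_symbol_dist(h))
        groups.setdefault(key, []).append(h)
    return list(groups.values())

if __name__ == "__main__":
    for i, group in enumerate(causal_states()):
        print(f"causal state {i}: {len(group)} histories, e.g. {group[:3]}")
```

Running the sketch shows eight valid length-4 histories collapsing into just two causal states, which is the sense in which a predictively sufficient world model can be far smaller than a fully accurate one.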
Publication Agreement Form: pdf
Submission Number: 367