Keywords: autonomous robotics, safety prediction, world models, foundation models
Abstract: A world model creates a surrogate world for training a controller and predicting safety violations by learning a system's internal dynamics model.
However, existing world models rely solely on statistical learning of how observations change in response to actions, and lack precise quantification of how accurate the surrogate dynamics are, which poses a significant challenge for safety-critical systems.
To address this challenge, we propose foundation world models that embed observations into meaningful and interpretable latent representations. This enables the surrogate dynamics to directly predict interpretable future states by leveraging a training-free large language model. On two common benchmarks, this novel model outperforms standard world models on the safety prediction task and performs comparably to supervised learning despite using no training data. We evaluate its performance with a more specialized, system-relevant metric by comparing estimated states rather than aggregating observation-wide error.
Submission Number: 12