Continual learning and refinement of causal models through dynamic predicate invention

Published: 01 Mar 2026, Last Modified: 09 Mar 2026
Venue: UCRL@ICLR2026 Poster
License: CC BY 4.0
Keywords: Inductive Logic Programming, World model, Symbolic AI
TL;DR: Constructing a symbolic causal world model by integrating continuous model learning and repair, leveraging predicate invention to build a hierarchy of semantically meaningful, disentangled concepts from the agent's observations.
Abstract: Efficiently navigating complex environments requires agents to internalize the underlying logic of their world, yet standard world-modelling methods often struggle with sample inefficiency, lack of transparency, and poor scalability. We propose a framework for constructing symbolic causal world models entirely online by integrating continuous model learning and repair into the agent's decision loop. By leveraging Meta-Interpretive Learning and predicate invention to find semantically meaningful and reusable abstractions, the agent constructs a hierarchy of disentangled, high-quality concepts from its observations. We demonstrate that our lifted inference approach scales to domains with complex relational dynamics, where propositional methods suffer from combinatorial explosion, while achieving sample efficiency orders of magnitude higher than an established PPO neural-network baseline.
Submission Number: 24