Continuously Updating Digital Twins using Large Language Models

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Digital twins are models of real-world systems that can simulate their dynamics in response to potential actions. In complex settings, the state and action variables of a system, and the data and knowledge available about it, can constantly change, requiring digital twins to continuously update with these changes to remain relevant. Current approaches struggle in this regard: they require fixed, well-defined modelling environments, cannot adapt to novel variables without re-designs, and cannot incorporate new information without re-training. To address this, we frame digital twinning as an in-context learning problem for large language models, enabling seamless updates to the twin at inference time. We develop CALM-DT, a Context-Adaptive Language Model-based Digital Twin that can accurately simulate across diverse state-action spaces using in-context learning alone, by utilising fine-tuned encoders for sample retrieval. We empirically demonstrate CALM-DT's competitive performance with existing digital twin approaches, and its unique ability to adapt to changes in its modelling environment without parameter updates.
Lay Summary: We tackle a problem with "digital twins" – computational models of real-world systems (e.g., a cell, a medical patient, a city) that can be used to simulate different scenarios for decision-making purposes. Current digital twins lose relevance to their counterpart physical system when real-world conditions change, like when a new medical treatment becomes available, or when new, informative data is released about patients with a certain disease, and they can require extensive re-designs and re-training to get up to date again. We show how large language models (LLMs) can act as accurate digital twins that overcome this limitation, as an LLM-based twin can easily incorporate new variables or information on the fly using natural language prompting alone. Our method, called CALM-DT, uses neural network encoders to select the data most relevant to the target system that we want to twin, before prompting an LLM to simulate the target system forward in time, using insights derived from the retrieved data. We show that CALM-DT matches or beats existing digital twin approaches in terms of simulation accuracy, while being much easier to adapt when changes occur to the real-world system. This promises to increase the feasibility of digital twin deployment in complex environments, where changes occur frequently.
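To make the retrieve-then-prompt idea concrete, here is a minimal sketch of the pipeline described above. This is not the authors' implementation: the encoder is stubbed out with a random projection (in CALM-DT the encoders are fine-tuned for sample retrieval), and every name here (`encode`, `retrieve`, `build_prompt`) as well as the prompt format is an illustrative assumption.

```python
import numpy as np

# Hypothetical sketch of a retrieval-augmented in-context digital twin.
# Names and prompt format are assumptions, not the CALM-DT API.

def encode(trajectory_text: str) -> np.ndarray:
    """Stand-in for a fine-tuned encoder that embeds a serialized
    state-action trajectory into a unit vector. Here: a deterministic
    random projection, purely for illustration."""
    rng = np.random.default_rng(abs(hash(trajectory_text)) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Select the k historical trajectories most similar to the target
    system's context, by cosine similarity in embedding space."""
    q = encode(query)
    scores = [float(q @ encode(doc)) for doc in corpus]
    top = sorted(range(len(corpus)), key=lambda i: -scores[i])[:k]
    return [corpus[i] for i in top]

def build_prompt(target_context: str, exemplars: list[str], action: str) -> str:
    """Assemble an in-context learning prompt: retrieved trajectories act
    as demonstrations, and the LLM is asked to roll the target system's
    state forward under the proposed action."""
    demos = "\n\n".join(f"Example trajectory:\n{e}" for e in exemplars)
    return (
        f"{demos}\n\n"
        f"Target system so far:\n{target_context}\n"
        f"Action taken: {action}\n"
        f"Predict the next state:"
    )

# Usage: new variables or data enter only through the retrieved text and
# the prompt, so the twin updates at inference time with no re-training.
corpus = [
    "patient A: dose=5mg -> heart rate 72 -> 68",
    "patient B: dose=10mg -> heart rate 80 -> 70",
]
prompt = build_prompt(
    "patient C: heart rate 78",
    retrieve("patient C: heart rate 78", corpus, k=2),
    "dose=5mg",
)
```

The design point this illustrates is that adaptation lives entirely in the retrieval corpus and the prompt: adding a new treatment or a newly released dataset means adding text, not re-designing or re-training the model.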
Primary Area: Deep Learning->Sequential Models, Time series
Keywords: Digital twin, simulation, dynamical systems, LLM, in-context learning
Submission Number: 3536