Causal Emergent Representation Learning Under Distribution Shift in Critical Care Time Series

Published: 23 Sept 2025 · Last Modified: 18 Oct 2025 · TS4H NeurIPS 2025 · CC BY 4.0
Keywords: time-series, emergence, critical care
TL;DR: Causal emergent representations improve OOD generalization across clinical time series.
Abstract: Understanding the internal processes of deep learning models has become a central challenge, and causal representation learning offers a promising framework for their interpretation. We investigate how a neural network can learn to capture high-level, "emergent" causal abstractions from complex clinical time series. We introduce a conceptual framework that distinguishes perceived emergence, a model's ability to identify emergent patterns within its familiar training environment, from true emergence, a model's ability to preserve this abstraction on novel, out-of-distribution data. We evaluate this framework with reciprocal training and verification experiments on two large critical care time series datasets, using an information-theoretic objective that serves as a strong inductive bias for learning emergent causal structure. Our results show that the models capture perceived emergence within their training environments and also exhibit true emergence across datasets, indicating robust, causally invariant generalization. We account for this generalization by analyzing the internal mechanics of the learned representations and the stability of the mutual information of input features under distribution shift, contributing to a clearer understanding of how such models may achieve out-of-distribution generalization in clinical settings.
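The abstract's mutual-information stability analysis can be illustrated with a toy sketch. The snippet below is not the paper's estimator or data: it uses a simple histogram-based mutual information estimate on synthetic in-distribution and shifted samples (the generating process, the variable names, and the binning choice are all assumptions) to show what comparing I(feature; representation) across environments might look like.

```python
import numpy as np


def estimate_mi(x, z, bins=16):
    """Histogram-based estimate of mutual information I(x; z) in nats."""
    joint, _, _ = np.histogram2d(x, z, bins=bins)
    pxz = joint / joint.sum()                      # joint distribution
    px = pxz.sum(axis=1, keepdims=True)            # marginal of x
    pz = pxz.sum(axis=0, keepdims=True)            # marginal of z
    nonzero = pxz > 0
    return float(np.sum(pxz[nonzero] * np.log(pxz[nonzero] / (px @ pz)[nonzero])))


rng = np.random.default_rng(0)

# Hypothetical in-distribution feature x and a representation z that tracks it.
x_id = rng.normal(0.0, 1.0, 5000)
z_id = 0.8 * x_id + rng.normal(0.0, 0.5, 5000)

# Hypothetical shifted environment: same dependence, different feature marginal.
x_ood = rng.normal(1.5, 2.0, 5000)
z_ood = 0.8 * x_ood + rng.normal(0.0, 0.5, 5000)

mi_id = estimate_mi(x_id, z_id)
mi_ood = estimate_mi(x_ood, z_ood)
print(f"I(x; z) in-distribution: {mi_id:.3f} nats")
print(f"I(x; z) under shift:     {mi_ood:.3f} nats")
# A small gap between the two estimates would be read as stability of the
# feature-representation dependence under the marginal shift.
```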
Submission Number: 102