Causal Emergent Representation Learning Under Distribution Shift in Critical Care Time Series

Published: 23 Sept 2025, Last Modified: 01 Dec 2025 · TS4H NeurIPS 2025 Spotlight · CC BY 4.0
Keywords: time-series, emergence, critical care
TL;DR: Causal emergent representations improve OOD generalization across clinical time series.
Abstract: Understanding the internal processes of deep learning models has become a central challenge, and causal representation learning offers one framework for interpreting them. We investigate how a neural network can learn to capture high‑level “emergent” causal abstractions from complex clinical time series. We introduce a conceptual framework that distinguishes between perceived emergence, a model’s ability to identify emergent patterns within its familiar training environment, and true emergence, its ability to preserve such abstractions on out‑of‑distribution data. We evaluate this framework through reciprocal training and verification experiments on two large critical care time‑series datasets, using an information‑theoretic objective that provides an inductive bias toward learning emergent causal structure. Our results show that the models capture perceived emergence within their training environments and also demonstrate true emergence across datasets, indicating robust, causally invariant generalization. We further examine this behavior by analyzing the internal representations and the stability of feature‑wise mutual information of input variables under distribution shift, contributing to a clearer picture of how such models may achieve out‑of‑distribution generalization in clinical settings.
Submission Number: 102