Keywords: representation learning, geometry, graphical models, causality
Abstract: Learning meaningful causal representations from observations has emerged as a crucial task for facilitating machine learning applications and driving scientific discoveries in fields such as climate science, biology, and physics. This process involves disentangling high-level latent variables and their causal relationships from low-level observations. Previous work that achieves identifiability typically focuses on cases where the observations are either i.i.d. or follow a latent discrete-time process. Nevertheless, many real-world settings require identifying latent variables that are stochastic processes (e.g., a multivariate point process). To this end, we develop identifiable causal representation learning for continuous-time latent stochastic point processes. We establish theoretical identifiability by analyzing the geometry of the parameter space. Building on this analysis, we develop MUTATE, a variational autoencoder framework with a time-adaptive transition module for modeling stochastic dynamics. Across simulated and empirical studies, we find that MUTATE has the potential to answer questions in numerous scientific fields.
Primary Area: causal reasoning
Submission Number: 535