Keywords: causal discovery, latent confounding, deconfounding, optimization framework, identifiability, Gaussian noise, directed acyclic graphs, structure learning, differentiable causal discovery, graphical models, machine learning
TL;DR: DECOR jointly learns a DAG and correlated noise in linear Gaussian SEMs with latent confounding, uses bow-free plus eigenvalue-margin conditions for identifiability, and outperforms baselines.
Abstract: We study structure learning for linear Gaussian SEMs in the presence of latent confounding. Existing continuous methods excel when errors are independent, while deconfounding-first pipelines rely on pervasive factor structure or nonlinearity. We propose **DECOR**, a single likelihood-based and fully differentiable estimator that jointly learns a DAG and a correlated noise model. Our theory gives simple sufficient conditions for global parameter identifiability: if the mixed graph is bow-free and the noise covariance has a uniform eigenvalue margin, then the map from $(B,\Omega)$ to the observational covariance is injective, so both the directed structure and the noise are uniquely determined. The estimator alternates a smooth-acyclic graph update with a convex noise update, and can include a light bow-complementarity penalty or a post hoc reconciliation step. On synthetic benchmarks that vary confounding density, graph density, latent rank, and dimension with $n<p$, DECOR matches or outperforms strong baselines and is especially robust when confounding is non-pervasive, while remaining competitive under pervasiveness.
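To make the alternating scheme in the abstract concrete, here is a minimal sketch of its two building blocks in the model $X = BX + \varepsilon$, $\mathrm{Cov}(\varepsilon)=\Omega$, so $\Sigma = (I-B)^{-1}\Omega(I-B)^{-\top}$. The function names, the NOTEARS-style penalty $h(B)=\mathrm{tr}\,e^{B\circ B}-p$, and the eigenvalue-clipping projection are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(B):
    """Smooth acyclicity penalty h(B) = tr(exp(B o B)) - p (NOTEARS-style).
    h(B) = 0 exactly when B is the weighted adjacency matrix of a DAG."""
    return np.trace(expm(B * B)) - B.shape[0]

def noise_update(B, S, margin=0.1):
    """Convex noise step (sketch): given B, the unconstrained Gaussian MLE of
    the noise covariance is (I - B) S (I - B)^T, where S is the empirical
    covariance. Clipping its eigenvalues at `margin` is one simple way to
    enforce the uniform eigenvalue-margin condition from the identifiability
    theory (an assumption here, not necessarily the paper's projection)."""
    p = B.shape[0]
    A = np.eye(p) - B
    Omega = A @ S @ A.T
    w, V = np.linalg.eigh(Omega)
    return (V * np.maximum(w, margin)) @ V.T
```

A full estimator would alternate this noise step with a penalized-likelihood graph update on $B$ (e.g. gradient descent on the negative log-likelihood plus an augmented-Lagrangian term on $h(B)$), which is omitted here for brevity.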
Supplementary Material: pdf
Primary Area: causal reasoning
Submission Number: 22961