Uncovering Hidden Correctness in LLM Causal Reasoning via Symbolic Verification

Published: 23 Sept 2025, Last Modified: 21 Oct 2025 · NeurIPS 2025 Workshop CauScien Poster · CC BY 4.0
Keywords: Causality, Verification, LLM, BFS
TL;DR: We provide a better way to evaluate the validity of causal expressions, one that uncovers correctness in LLM outputs missed by standard metrics.
Abstract: Large language models (LLMs) are increasingly applied to tasks involving causal reasoning. However, current benchmarks often rely on string matching or surface-level metrics that fail to assess whether a model’s output is formally valid under causal semantics. We propose DoVerifier, a symbolic verification framework that checks whether LLM-generated causal expressions are derivable from a given causal graph using rules from do-calculus and probability theory. This allows us to recover correct answers that would otherwise be marked incorrect due to superficial differences. Evaluations on synthetic data and causal QA benchmarks show that DoVerifier more accurately captures semantic correctness than standard metrics, offering a more rigorous and informative way to evaluate LLMs on causal tasks.
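The abstract describes checking derivability of a causal expression via rule applications, and the keywords mention BFS. Below is a minimal sketch of that idea, not the paper's implementation: a breadth-first search over single-step rewrites that asks whether a model's expression can be reached from a reference estimand. The function name `is_derivable`, the string-based expression encoding, and the toy unconfounded-chain rule are all illustrative assumptions.

```python
from collections import deque
from typing import Callable, Iterable

# A rewrite rule maps an expression to the expressions reachable in one step
# (e.g., one application of a do-calculus rule or a probability identity
# licensed by the causal graph).
Rule = Callable[[str], Iterable[str]]

def is_derivable(start: str, target: str, rules: list[Rule], max_steps: int = 6) -> bool:
    """BFS over single-step rewrites: True if `target` is reachable from
    `start` within `max_steps` rule applications."""
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        expr, depth = frontier.popleft()
        if expr == target:
            return True
        if depth == max_steps:
            continue
        for rule in rules:
            for nxt in rule(expr):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
    return False

# Toy rule for the chain X -> Y with no confounding: do-calculus rule 2
# licenses dropping the intervention, P(Y | do(X)) = P(Y | X).
def drop_do_when_unconfounded(expr: str) -> Iterable[str]:
    yield expr.replace("do(X)", "X")

if __name__ == "__main__":
    # An LLM answer "P(Y | X)" fails exact string matching against the
    # reference "P(Y | do(X))" yet is verified as semantically equivalent.
    print(is_derivable("P(Y | do(X))", "P(Y | X)", [drop_do_when_unconfounded]))
```

In this framing, recovering "hidden correctness" amounts to accepting any answer that the search proves equivalent to the reference under graph-licensed rules, rather than only answers that match it textually.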
Submission Number: 39