Abstract: Current causal discovery methods using Large Language Models (LLMs) often rely on pairwise or iterative strategies, which fail to capture global dependencies, amplify local biases, and reduce overall accuracy. This work introduces a unified framework for one-step discovery of the full causal graph, comprising: (1) \textbf{prompt-based discovery} with in-context learning when node metadata is available, and (2) \textbf{Causal\_llm}, a purely data-driven method for settings without metadata.
Empirical results demonstrate that the prompt-based approach outperforms state-of-the-art models (GraN-DAG, GES, ICA-LiNGAM) by approximately 40\% in edge accuracy on datasets such as Asia and Sachs, while maintaining strong performance on more complex graphs (ALARM, HEPAR2). Causal\_llm performs consistently well across all benchmarks, achieving 50\% faster inference than reinforcement learning-based methods and improving precision by 25\% in fairness-sensitive domains such as legal decision-making.
We also introduce two domain-specific DAGs, one for bias propagation and another for legal reasoning under the Bharatiya Nyaya Sanhita, demonstrating LLMs' capability for systemic, real-world causal discovery.
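The one-step, prompt-based discovery described above can be illustrated with a minimal sketch, assuming an OpenAI-style chat completion API; the client, model name, prompt wording, and the `discover_full_dag` helper are hypothetical illustrations, not the paper's actual implementation.

```python
# Minimal sketch of one-step, prompt-based full-graph discovery.
# The API client, model name, and prompt format are illustrative assumptions.
import json
from openai import OpenAI  # assumed OpenAI-style chat API


def discover_full_dag(nodes: dict[str, str], model: str = "gpt-4o") -> list[tuple[str, str]]:
    """Ask the LLM for the complete edge list of a causal DAG in a single call.

    `nodes` maps each variable name to a short metadata description.
    Returns a list of (cause, effect) pairs.
    """
    variable_block = "\n".join(f"- {name}: {desc}" for name, desc in nodes.items())
    prompt = (
        "You are a causal discovery assistant. Given the variables below, "
        "output the FULL causal graph at once (not pairwise judgments) as a "
        'JSON list of ["cause", "effect"] pairs. The graph must be acyclic.\n\n'
        f"Variables:\n{variable_block}"
    )
    response = OpenAI().chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return [tuple(edge) for edge in json.loads(response.choices[0].message.content)]


# Example with variables drawn from the Asia benchmark mentioned in the abstract.
edges = discover_full_dag({
    "smoking": "whether the patient smokes",
    "lung_cancer": "presence of lung cancer",
    "dyspnoea": "shortness of breath",
})
print(edges)  # e.g. [("smoking", "lung_cancer"), ("lung_cancer", "dyspnoea")]
```

Because the LLM sees all variables and their metadata in a single prompt, each edge decision is conditioned on the whole graph rather than on isolated pairs, which is the motivation for the one-step formulation.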
Paper Type: Long
Research Area: Generation
Research Area Keywords: Model Understanding, Explainability, Interpretability, and Trust
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 2130