Abstract: Understanding probabilistic dependencies among variables is central to analyzing complex systems. Traditional structure learning methods often require extensive observational data or are limited by manual, error-prone incorporation of expert knowledge. Recent studies have explored using large language models (LLMs) for structure learning, but most treat LLMs as auxiliary tools for pre-processing or post-processing, leaving the core learning process data-driven. In this work, we introduce a unified framework for Bayesian network structure discovery that places LLMs at the center, supporting both data-free and data-aware settings. In the data-free regime, we propose \textbf{PromptBN}, which leverages LLM reasoning over variable metadata to generate a complete directed acyclic graph (DAG) in a single call. PromptBN effectively enforces global consistency and acyclicity through dual validation, achieving constant $\mathcal{O}(1)$ query complexity. When observational data are available, we present \textbf{ReActBN} to further refine the initial graph. ReActBN combines statistical evidence with LLM reasoning by integrating a novel ReAct-style reasoning loop with configurable structure scores (e.g., the Bayesian Information Criterion). Experiments demonstrate that our method outperforms prior data-only, LLM-only, and hybrid baselines, particularly in low- or no-data regimes and on out-of-distribution datasets.
Submission Type: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: 1. We made a minor revision to the second paragraph of Section 1.
2. We improved the writing in Section 4.2.
3. We fixed the minor issues mentioned by Reviewer 5trM, including adding a missing period and correcting the number of overperforming datasets (7 -> 6).
4. We followed the suggestion in Reviewer 5trM's first comment and increased the font size in Figure 1.
5. We examined the full content and fixed several citation formatting issues.
Code: https://github.com/sherryzyh/llmbn
Assigned Action Editor: ~Renjie_Liao1
Submission Number: 6373