On Incorporating Prior Knowledge Extracted from Pre-trained Language Models into Causal Discovery

Anonymous

16 Feb 2024 (modified: 04 Nov 2024) · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: Pre-trained Language Models (PLMs) can reason about causality by leveraging vast pre-trained knowledge and text descriptions of datasets, proving effective even when data are scarce. However, current PLM-based causal reasoning methods have two crucial limitations: i) PLMs cannot ingest large datasets in a prompt due to context-length limits, and ii) they are not adept at comprehending the interconnected causal structure as a whole. Data-driven causal discovery, on the other hand, can recover the causal structure as a whole, but it works well only when the number of observations is sufficiently large. To overcome each other's limitations, we propose a new framework that integrates PLM-based causal reasoning into data-driven causal discovery, resulting in improved and more robust performance. Furthermore, our framework extends to time-series data and exhibits superior performance.
Paper Type: long
Research Area: Interpretability and Analysis of Models for NLP
Contribution Types: Data analysis
Languages Studied: English
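
The abstract describes a pipeline in which causal beliefs elicited from a PLM act as prior knowledge for a data-driven structure learner. The sketch below is a minimal, hedged illustration of that general idea, not the authors' method or released code: query_plm_prior is a hypothetical placeholder for prompting a PLM about each variable pair, the greedy BIC hill-climbing search is a generic score-based discovery routine standing in for whatever algorithm the paper actually uses, and lam is an assumed hyperparameter weighting the PLM prior against the data likelihood.

# Minimal sketch (assumptions noted above): fold PLM-elicited edge priors
# into a score-based causal discovery search over DAG adjacency matrices.
import itertools
import numpy as np


def query_plm_prior(var_names):
    """Hypothetical PLM call: return a belief P(i -> j) in [0, 1] for each ordered pair.

    In the framework described in the abstract this would come from prompting a
    PLM with the variables' text descriptions; here we fake it with an
    uninformative 0.5 prior plus one confident edge, purely for illustration.
    """
    d = len(var_names)
    prior = np.full((d, d), 0.5)
    np.fill_diagonal(prior, 0.0)
    prior[0, 1] = 0.9  # e.g. the PLM is confident that X0 causes X1
    return prior


def bic_score(data, parents, j):
    """Gaussian BIC of variable j given its parent set (linear regression)."""
    n = data.shape[0]
    y = data[:, j]
    X = np.column_stack([np.ones(n), data[:, parents]]) if parents else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = max(resid @ resid / n, 1e-12)
    return -0.5 * n * np.log(sigma2) - 0.5 * X.shape[1] * np.log(n)


def total_score(data, adj, prior, lam=2.0):
    """Data fit (BIC) plus a log-prior term from the PLM edge beliefs."""
    d = adj.shape[0]
    fit = sum(bic_score(data, list(np.flatnonzero(adj[:, j])), j) for j in range(d))
    eps = 1e-6
    log_prior = np.where(adj == 1, np.log(prior + eps), np.log(1 - prior + eps))
    np.fill_diagonal(log_prior, 0.0)
    return fit + lam * log_prior.sum()


def is_dag(adj):
    """Check acyclicity by repeatedly removing nodes with no outgoing edges."""
    adj = adj.copy()
    while adj.size:
        sinks = np.flatnonzero(adj.sum(axis=1) == 0)
        if len(sinks) == 0:
            return False
        keep = np.setdiff1d(np.arange(adj.shape[0]), sinks)
        adj = adj[np.ix_(keep, keep)]
    return True


def hill_climb(data, prior, iters=100):
    """Greedy edge toggling guided by the prior-augmented score."""
    d = data.shape[1]
    adj = np.zeros((d, d), dtype=int)
    best = total_score(data, adj, prior)
    for _ in range(iters):
        improved = False
        for i, j in itertools.permutations(range(d), 2):
            cand = adj.copy()
            cand[i, j] = 1 - cand[i, j]  # toggle edge i -> j
            if not is_dag(cand):
                continue
            s = total_score(data, cand, prior)
            if s > best:
                adj, best, improved = cand, s, True
        if not improved:
            break
    return adj


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x0 = rng.normal(size=500)
    x1 = 2.0 * x0 + rng.normal(size=500)  # ground truth: X0 -> X1
    x2 = rng.normal(size=500)
    data = np.column_stack([x0, x1, x2])
    prior = query_plm_prior(["X0", "X1", "X2"])
    print(hill_climb(data, prior))

In this toy setup the confident PLM belief about X0 -> X1 nudges the search toward the correct orientation even when the data alone are weakly informative, while the uninformative 0.5 priors contribute nothing; this is one simple way language-model knowledge and observational evidence can be traded off, in the spirit of the framework the abstract describes.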