Abstract: This paper aims to enhance automatic causal relation classification from text. We introduce the Causal Evidence Graph (CEG), a graph-structured representation of lexical evidence of causality extracted automatically from causal texts. We further incorporate the CEG into a supervised causal relation classification model by jointly learning representations from the generated CEG and the sentence encoding obtained from a pre-trained BERT language model. Despite its simplicity, extensive experiments on three biomedical datasets and one open-domain dataset show an overall improvement of up to 2.6% and 4.7% in F1 score over state-of-the-art and baseline models, respectively. The results demonstrate the effectiveness of injecting the model directly with lexical causal evidence as features, which may not be explicitly captured by current pre-trained large language models such as BERT.