Knowledge-Augmented Language Models for Cause-Effect Relation Classification

Venue: ACL 2022 Workshop CSRR
Keywords: commonsense causal reasoning, cause-effect relation classification, pretrained language models
Abstract: Previous studies have shown the efficacy of knowledge-augmentation methods for pretrained language models. However, these methods behave differently across domains and downstream tasks. In this work, we investigate augmenting pretrained language models with knowledge graph data for cause-effect relation classification and commonsense causal reasoning. After automatically verbalizing the triples in ATOMIC2020, a wide-coverage commonsense knowledge graph, we continually pretrain BERT and evaluate the resulting model on cause-effect pair classification and on answering commonsense causal reasoning questions. Our results show that a continually pretrained language model augmented with commonsense reasoning knowledge outperforms our baselines on two commonsense causal reasoning benchmarks, COPA and BCOPA-CE, and on the Temporal and Causal Reasoning (TCR) dataset, without additional improvements to the model architecture or the use of quality-enhanced data for fine-tuning.
Published: No
Archival: Yes
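The pipeline the abstract describes (verbalize knowledge-graph triples into sentences, then continually pretrain BERT with the masked-language-modeling objective) can be sketched as below. This is a minimal illustration assuming the HuggingFace transformers and datasets libraries; the relation templates, example triples, and training settings are hypothetical stand-ins, not the authors' actual verbalizations or hyperparameters.

```python
# Minimal sketch: verbalize ATOMIC2020-style triples, then continue
# pretraining BERT with masked language modeling on the resulting text.
# Templates, triples, and hyperparameters below are illustrative only.

from transformers import (
    BertTokenizerFast, BertForMaskedLM,
    DataCollatorForLanguageModeling, Trainer, TrainingArguments,
)
from datasets import Dataset

# Hypothetical natural-language templates for a few ATOMIC2020 relations.
TEMPLATES = {
    "xEffect": "{head}. As a result, PersonX {tail}.",
    "xIntent": "{head}, because PersonX wanted {tail}.",
    "Causes":  "{head} causes {tail}.",
}

def verbalize(head: str, relation: str, tail: str) -> str:
    """Turn one (head, relation, tail) triple into a sentence."""
    return TEMPLATES[relation].format(head=head, tail=tail)

# Toy triples standing in for the full knowledge graph.
triples = [
    ("PersonX drinks coffee", "xEffect", "stays awake"),
    ("heavy rain", "Causes", "flooding"),
]
sentences = [verbalize(*t) for t in triples]

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Tokenize the verbalized sentences into a training dataset.
dataset = Dataset.from_dict({"text": sentences}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

# Standard MLM objective (15% of tokens masked), as in BERT pretraining.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-atomic2020-mlm", num_train_epochs=1),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()
```

After this continual-pretraining step, the resulting checkpoint would be fine-tuned and evaluated on the downstream benchmarks (COPA, BCOPA-CE, TCR) in the usual way.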