Abstract: Augmenting pretrained language models (PLMs) with knowledge graphs (KGs) has demonstrated superior performance for commonsense question answering (CSQA). In the knowledge fusion process, existing KG-augmented methods ignore (i) the knowledge already encoded in the PLM and (ii) the supervisory role the PLM can play. As a result, noise in the KG cannot be filtered effectively during knowledge fusion. In this paper, we propose a Conditional Knowledge Fusion method (CKF) (https://github.com/Xie-Minghui/CKF/) to enhance the commonsense reasoning ability of PLMs. First, we apply prompt learning to exploit the knowledge of the PLM, which provides a better semantic supervision signal for the knowledge fusion process. Second, we design a conditional fusion module to filter out KG noise. To further improve performance, we design a re-attention mechanism that supplements the PLM with commonsense knowledge. Experimental results demonstrate the effectiveness of CKF through considerable performance gains across three popular benchmark datasets.
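The abstract does not give the exact formulation of the conditional fusion module, so the following is only a minimal sketch of the general idea it describes: a fusion layer whose gate is conditioned on both the PLM and KG representations, letting noisy KG features be down-weighted in favor of the PLM signal. All names and dimensions here (`GatedFusion`, `hidden_dim`, the projection layer) are illustrative assumptions, not CKF's actual architecture.

```python
# Hedged sketch of a conditional (gated) fusion layer; not CKF's exact design.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuse a PLM representation with a KG representation via a learned gate.

    The gate is conditioned on both inputs, so uninformative (noisy) KG
    features can be suppressed in favor of the PLM representation.
    """

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.kg_proj = nn.Linear(hidden_dim, hidden_dim)
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, h_plm: torch.Tensor, h_kg: torch.Tensor) -> torch.Tensor:
        k = self.kg_proj(h_kg)
        # Elementwise gate in [0, 1], conditioned on both representations.
        g = torch.sigmoid(self.gate(torch.cat([h_plm, k], dim=-1)))
        # g -> 1 keeps the PLM signal; g -> 0 admits the KG signal.
        return g * h_plm + (1.0 - g) * k

# Usage: a batch of 4 question encodings fused with KG subgraph encodings.
fusion = GatedFusion(hidden_dim=768)
h_plm = torch.randn(4, 768)
h_kg = torch.randn(4, 768)
fused = fusion(h_plm, h_kg)  # shape: (4, 768)
```

Under this sketch, the gate plays the filtering role the abstract attributes to the conditional fusion module: when the KG signal is noisy, the learned gate can push the fused representation back toward the PLM encoding.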