Weakly-supervised explainable question answering via question-aware contrastive learning and adaptive gate mechanism
Abstract: Explainable question answering (XQA) aims to answer a given question while also providing an explanation. Existing approaches usually produce explanatory text via saliency mapping or supervised learning, and still face two challenges: (i) reliance on large amounts of expert-annotated evidence for training, and (ii) weak coupling between answers and evidence. In this paper, we propose a weakly supervised XQA method, namely WSQG, which automatically captures effective distantly supervised signals and mines evidence that is coupled with the answers. Specifically, we propose a multi-granularity evidence distillation method based on a growth-clipping strategy, which produces evidence pseudo-labels that trade off informativeness against conciseness. We then devise a question-aware contrastive learning objective to strengthen the coupling of answers and explanations during fine-tuning, encouraging the model to maximize the similarity between the question and potential evidence at the semantic representation level. In addition, we propose an adaptive selection gate mechanism to reduce the noise that weakly supervised signals introduce into model training. We conduct experiments on two unsupervised XQA benchmarks, and the results show that WSQG not only improves model performance (by up to 7.6%) but also exhibits better answer-explanation coupling (by up to 16.2%) compared to the baseline methods.
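As a rough illustration only (not the paper's actual implementation), a question-aware contrastive objective of this kind can be sketched as an InfoNCE-style loss that pulls the question representation toward the pseudo-labeled evidence sentence and pushes it away from the other sentences in the passage. All names below (the function, the temperature parameter, the toy embeddings) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def question_aware_contrastive_loss(q_emb, sent_embs, evidence_idx, temperature=0.1):
    """Minimal InfoNCE-style sketch of a question-aware contrastive objective.

    q_emb:        (d,) question representation
    sent_embs:    (n, d) sentence representations of the passage
    evidence_idx: index of the pseudo-labeled evidence sentence (weak signal)
    """
    q = F.normalize(q_emb, dim=-1)
    s = F.normalize(sent_embs, dim=-1)
    sims = s @ q / temperature            # (n,) scaled cosine similarities
    log_probs = F.log_softmax(sims, dim=-1)
    return -log_probs[evidence_idx]       # negative log-likelihood of the evidence sentence

# Toy usage with random embeddings (in practice these would come from the encoder).
q_emb = torch.randn(768)
sent_embs = torch.randn(5, 768)
loss = question_aware_contrastive_loss(q_emb, sent_embs, evidence_idx=2)
```

In the weakly supervised setting described above, the positive (evidence_idx) would come from the automatically distilled pseudo-labels rather than human annotation, which is why a noise-reduction mechanism such as the adaptive selection gate is needed alongside this objective.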