Explanations for CommonsenseQA: New Dataset and Models

Sep 03, 2021 · CSKB
  • Keywords: ECQA, CommonsenseQA, Explanations, Dataset, CQA
  • TL;DR: A paper releasing the Explanations for CommonsenseQA (ECQA) dataset, along with desiderata for an explanation dataset in the common-sense domain, and initial retrieval and generation models trained on that dataset.
  • Abstract: The CommonsenseQA (CQA) dataset (Talmor et al., 2019) was recently released to advance research on the common-sense question answering (QA) task. Whereas prior work has mostly focused on proposing QA models for this dataset, our aim is to retrieve as well as generate explanations for a given (question, correct answer choice, incorrect answer choices) tuple from this dataset. Our definition of an explanation is based on certain desiderata, and translates an explanation into a set of positive and negative common-sense properties (aka facts) that not only explain the correct answer choice but also refute the incorrect ones. We human-annotate a first-of-its-kind dataset (called ECQA) of positive and negative properties, as well as free-flow explanations, for 11K QA pairs taken from the CQA dataset. We propose a latent-representation-based property retrieval model as well as a GPT-2 based property generation model with a novel two-step fine-tuning procedure. We also propose a free-flow explanation generation model. Extensive experiments show that our retrieval model beats the BM25 baseline by a relative gain of 100% in $F_1$ score, our property generation model achieves a respectable $F_1$ score of 36.4, and our free-flow generation model achieves a similarity score of 61.9, where the last two scores are based on a human-correlated semantic similarity metric.