Analyzing the Contribution of Commonsense Knowledge Sources for Why-Question Answering

Published: 28 Mar 2022, Last Modified: 05 May 2023
Venue: ACL 2022 Workshop CSRR
Keywords: commonsense knowledge, why questions, question answering, commonsense injection
TL;DR: External KBs help answer why questions but it is hard to extract the relevant information
Abstract: Answering questions about why events happen in narratives requires commonsense knowledge that is external to the narrative. What aspects of this knowledge are accessible to large models? What aspects can be made accessible via external commonsense resources? We study these questions in the context of answering Why questions in the TellMeWhy dataset, using COMET as a source of relevant commonsense relations. We analyze the relative improvements over a base T5 model when (a) increasing the model size, (b) injecting knowledge from COMET as part of the task input, and (c) asking the model to generate the COMET relation type as an explanation in addition to its answer. Results show that the larger model, as expected, yields substantial improvements over the base. Interestingly, we find that question-specific COMET relations can provide substantial improvements for both base and large models, with possible additional gains when the model is also asked to generate the COMET relation type. Accordingly, we augment a large model with noisy hints from COMET and find that this improves performance on the TellMeWhy task. We also develop a simple ontology of knowledge types and analyze the relative coverage of the different models across these categories. Together, these findings suggest potential for methods that can automatically select and inject commonsense knowledge from relevant sources.
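
The abstract describes injecting COMET knowledge by including it in the task input of a T5 model. Below is a minimal sketch of what such injection could look like, assuming the COMET relation text has already been generated for the question's target event; the prompt format, checkpoint name, and example narrative are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch (not the authors' code): feeding a precomputed COMET
# relation to a T5 question-answering model as part of its input.
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Hypothetical example inputs; in TellMeWhy the question asks why an event
# in the narrative happened, and the COMET hint is assumed to have been
# generated beforehand for the question's target event.
narrative = "Jenny forgot her umbrella. She got soaked on the way home."
question = "Why did Jenny get soaked on the way home?"
comet_hint = "xNeed: to have an umbrella"  # noisy commonsense hint from COMET

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Knowledge is injected simply by concatenating it into the input string;
# the exact field names in this prompt are illustrative.
source = f"question: {question} context: {narrative} knowledge: {comet_hint}"
inputs = tokenizer(source, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```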
Published: No
Archival: No