CIKQA: Learning Commonsense Inference with a Unified Knowledge-in-the-loop QA Paradigm

Published: 28 Mar 2022, Last Modified: 05 May 2023 · ACL 2022 Workshop CSRR · Readers: Everyone
Abstract: Recently, the community has achieved substantial progress on many commonsense reasoning benchmarks. However, it is still unclear what is learned during training: the knowledge, how to do inference, or both? We argue that, due to the large scale of commonsense knowledge, it is infeasible to annotate a training set large enough for each task to cover all the commonsense it requires. We should therefore treat commonsense knowledge acquisition and inference over commonsense knowledge as two separate tasks. In this work, we investigate models' commonsense inference capabilities from two perspectives: (1) whether models can tell if the knowledge they have is enough to solve the task; (2) whether models can learn commonsense inference capabilities that generalize across commonsense tasks. We first align commonsense tasks with relevant knowledge from commonsense knowledge bases and ask humans to annotate whether that knowledge is sufficient. Then, we convert different commonsense tasks into a unified question answering format to evaluate models' generalization capabilities. We name the benchmark Commonsense Inference with Knowledge-in-the-loop Question Answering (CIKQA).
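To make the unified knowledge-in-the-loop QA format concrete, the sketch below shows what a single converted instance might look like. This is only an illustration assuming a plausible schema; the field names (`question`, `candidates`, `knowledge`, `knowledge_sufficient`) are hypothetical and are not taken from the CIKQA release, but they mirror the components the abstract describes: a task recast as a question, answer candidates, aligned knowledge triples, and a human label for whether that knowledge suffices.

```python
# Illustrative sketch only: field names are hypothetical, not the actual CIKQA schema.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class KnowledgeInTheLoopExample:
    """One task instance after conversion to the unified QA format."""
    question: str                          # original task reformulated as a question
    candidates: List[str]                  # answer options
    answer: str                            # gold answer
    knowledge: List[Tuple[str, str, str]]  # aligned KB triples (head, relation, tail)
    knowledge_sufficient: bool             # human annotation: is the knowledge enough?


# A toy pronoun-resolution task recast as knowledge-in-the-loop QA.
example = KnowledgeInTheLoopExample(
    question="The trophy didn't fit in the suitcase because it was too big. What was too big?",
    candidates=["the trophy", "the suitcase"],
    answer="the trophy",
    knowledge=[("trophy", "HasProperty", "big"), ("suitcase", "UsedFor", "holding items")],
    knowledge_sufficient=True,
)
```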