Keywords: commonsense reasoning, latent-variable model, explanations, generative modeling
TL;DR: We propose a latent-variable model that produces explicit reasoning steps for multiple-choice QA tasks that require commonsense reasoning.
Abstract: Commonsense reasoning is an important capability for a range of AI applications such as text understanding. Neural models for commonsense QA often predict answers directly from learned representations of language. In this work, we consider the challenge of producing an explicit reasoning step for a commonsense QA system. We propose a latent-variable model that identifies what type of knowledge from an external knowledge base may be relevant to answering the question, computes the commonsense inferences, and predicts the answer. Our method can therefore learn to provide posterior rationales for why a certain answer was chosen. Experimental results show that the model identifies the correct reasoning step in twice as many examples as an existing unsupervised approach for producing explanations, while maintaining accuracy comparable to end-to-end pretrained models.
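To make the latent-variable formulation concrete, here is a minimal sketch of how such a model could score answers by marginalizing over a latent knowledge type z, and how a posterior rationale p(z | q, a) could be recovered afterwards. All function names, knowledge-type labels, and probabilities below are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: answer probability marginalizes over a latent knowledge type z,
#     p(a | q) = sum_z p(z | q) * p(a | q, z)
# and the posterior rationale is p(z | q, a) ∝ p(z | q) * p(a | q, z).
# In the real model these distributions would come from neural components
# conditioned on the question and an external knowledge base.

def predict(prior, likelihoods):
    """Marginalize over latent knowledge types.

    prior:       {z: p(z | q)}
    likelihoods: {z: {a: p(a | q, z)}}
    Returns      {a: p(a | q)}.
    """
    answers = {}
    for z, pz in prior.items():
        for a, pa in likelihoods[z].items():
            answers[a] = answers.get(a, 0.0) + pz * pa
    return answers

def posterior_rationale(prior, likelihoods, answer):
    """p(z | q, a): which knowledge type best explains the chosen answer."""
    joint = {z: pz * likelihoods[z][answer] for z, pz in prior.items()}
    total = sum(joint.values())
    return {z: v / total for z, v in joint.items()}

# Toy example: two hypothetical knowledge-base relation types, three choices.
prior = {"causes": 0.6, "is_used_for": 0.4}
likelihoods = {
    "causes":      {"A": 0.7, "B": 0.2, "C": 0.1},
    "is_used_for": {"A": 0.1, "B": 0.8, "C": 0.1},
}
p_answer = predict(prior, likelihoods)       # {"A": 0.46, "B": 0.44, "C": 0.10}
best = max(p_answer, key=p_answer.get)       # "A"
rationale = posterior_rationale(prior, likelihoods, best)
```

Here the posterior rationale concentrates on the "causes" relation (≈0.91), which is the kind of explanation the abstract refers to: the knowledge type most responsible for the chosen answer.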