Do Language Models Perform Generalizable Commonsense Inference?

Published: 18 Sept 2021, Last Modified: 15 Sept 2024
Venue: CSKB
Readers: Everyone
Keywords: Commonsense Knowledge, language models
TL;DR: An analysis of how well language models generalize when performing commonsense inference.
Abstract: Inspired by evidence that pretrained language models (LMs) encode commonsense knowledge, recent work has applied LMs to automatically populate commonsense knowledge graphs (CKGs). However, little is understood about how well they generalize to multiple CKGs, unseen relations, and novel entities. This paper analyzes the ability of LMs to perform generalizable commonsense inference in terms of knowledge capacity, transferability, and induction. Our experiments on these three aspects show that: (1) LMs can adapt to different schemas defined by multiple CKGs but fail to reuse the knowledge to generalize to new relations. (2) Adapted LMs generalize well to unseen subjects, but less so to novel objects. Future work should investigate how to improve the transferability and induction of commonsense mining from LMs.
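
For illustration, below is a minimal sketch (not the authors' code) of the kind of LM-based CKG population the abstract refers to: prompting a pretrained LM to propose the object of a commonsense triple given a subject and relation. The model choice, prompt format, and decoding settings are assumptions made for this example only.

```python
# Hypothetical sketch of LM-based commonsense triple completion.
# An off-the-shelf causal LM is prompted with a (subject, relation) context
# and asked to generate a candidate object phrase.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumed off-the-shelf LM; the paper adapts LMs to CKG schemas
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Subject and relation verbalized as a natural-language prompt
# (in the spirit of ATOMIC/ConceptNet-style schemas).
prompt = "PersonX goes to the gym. As a result, PersonX feels"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=5,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)

# Keep only the newly generated tokens as the candidate object of the triple.
completion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(completion.strip())
```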
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/do-language-models-perform-generalizable/code)