Towards Concept-Aware Language Models

Anonymous

17 Apr 2023 · ACL ARR 2023 April Blind Submission
Abstract: Concepts play a pivotal role in various human cognitive abilities. However, there has been relatively little work on endowing machines with the ability to form and reason with concepts. In particular, pretrained language models (LMs) work at the level of words, not concepts. This is problematic because different words relating to the same concept compete for probability mass. Here we take a first step towards developing a concept-aware LM. Instead of rethinking the training process, we adapt existing LMs. We build a proof-of-concept LM that outputs a ranked list of concepts, and show that the concepts it produces are relatively coherent and diverse. We demonstrate that concepts could help improve the LM's ranking and robustness. While this work is preliminary, we believe concept-aware LMs can benefit many downstream tasks.
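To make the core idea concrete, here is a minimal sketch (not the paper's actual method) of how an off-the-shelf LM could be adapted to output ranked concepts: pool next-token probability mass across words that map to the same concept, so that synonyms contribute to one score rather than competing. The model choice ("gpt2"), the 200-token candidate cutoff, and the WordNet-synset-based word-to-concept mapping are all illustrative assumptions.

```python
from collections import defaultdict

import torch
from nltk.corpus import wordnet as wn  # assumes nltk.download("wordnet") has been run
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative model choice
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def ranked_concepts(prompt: str, top_k: int = 10, candidate_tokens: int = 200):
    """Rank concepts (here: WordNet synsets) by pooled next-token probability."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits for the next token
    probs = torch.softmax(logits, dim=-1)

    concept_mass = defaultdict(float)
    for token_id in torch.topk(probs, candidate_tokens).indices.tolist():
        word = tokenizer.decode([token_id]).strip().lower()
        synsets = wn.synsets(word)
        if not synsets:
            continue  # token has no WordNet entry; skip it
        # Crude word-to-concept mapping: take the most frequent sense.
        # Words sharing a synset pool their mass instead of competing for it.
        concept_mass[synsets[0].name()] += probs[token_id].item()

    return sorted(concept_mass.items(), key=lambda kv: -kv[1])[:top_k]

print(ranked_concepts("The doctor prescribed a"))
```

Pooling mass this way is what distinguishes a concept ranking from a word ranking: "car", "automobile", and "auto" each receive only a share of word-level probability, but under a mapping like the one above they contribute to a single concept's score.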
Paper Type: short
Research Area: Linguistic theories, Cognitive Modeling and Psycholinguistics