How to Query Language Models?

Anonymous

16 May 2021 (modified: 05 May 2023) · ACL ARR 2021 May Blind Submission
Abstract: Large pre-trained language models (LMs) are capable of recovering not only linguistic but also factual and commonsense knowledge. To access the knowledge stored in mask-based LMs, we can use cloze-style questions and let the model fill in the blank. The flexibility advantage over structured knowledge bases comes with the drawback of finding the right query for a certain information need. Inspired by human behavior to disambiguate a question, we propose to query LMs by example. To clarify the ambivalent question "Who does Neuer play for?", a successful strategy is to demonstrate the relation using another subject, e.g., "Ronaldo plays for Portugal. Who does Neuer play for?". We apply this approach of querying by example to the LAMA probe and obtain substantial improvements of up to 37.8% for BERT-large on the T-REx data when providing only 10 demonstrations, even outperforming a baseline that queries the model with up to 40 paraphrases of the question. The examples are provided through the model's context and thus require neither fine-tuning nor an additional forward pass. This suggests that LMs contain more factual and commonsense knowledge than previously assumed, if we query the model in the right way.
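The abstract describes prepending an in-context demonstration to a cloze query so the LM resolves the intended relation. Below is a minimal sketch of that idea using the Hugging Face fill-mask pipeline; the model variant, the cloze template, and the demonstration wording are illustrative assumptions, not taken from the paper's released software.

```python
# Minimal sketch of "querying by example" with a masked LM.
# Assumptions (not from the paper's code): model choice, templates,
# and demonstration wording are illustrative only.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-cased")

# Plain cloze query: ambiguous, the model may answer with a club or a country.
plain = "Neuer plays for [MASK]."

# Query by example: a demonstration with another subject clarifies
# the intended relation (here, national team rather than club).
demo = "Ronaldo plays for Portugal. Neuer plays for [MASK]."

for prompt in (plain, demo):
    top = fill_mask(prompt)[0]  # highest-scoring completion for the mask
    print(f"{prompt!r} -> {top['token_str']} ({top['score']:.3f})")
```

Because the demonstration is supplied entirely through the context window, this style of querying needs no fine-tuning and no extra forward pass beyond the single masked prediction.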
Software: zip
