ReadPrompt: A Readable Prompting Method for Reliable Knowledge Probing

Published: 07 Oct 2023, Last Modified: 01 Dec 2023 · EMNLP 2023 Findings
Submission Type: Regular Long Paper
Submission Track: Language Modeling and Analysis of Language Models
Submission Track 2: Interpretability, Interactivity, and Analysis of Models for NLP
Keywords: Prompt, Pre-trained Language Model, Readability, Knowledge Probing, Fact Retrieval, LAMA Dataset.
TL;DR: We propose a method that automatically searches for word combinations forming readable sentences to serve as prompts for probing the knowledge embedded in PLMs. Compared to existing methods, our approach achieves higher accuracy and more reliable results.
Abstract: Knowledge probing is a task that assesses the knowledge encoded within pre-trained language models (PLMs) by having the PLM complete prompts such as "Italy is located in \_\_." The model's prediction precision serves as a lower bound on the amount of knowledge it contains. Subsequent works explore training a series of vectors as prompts to guide PLMs towards more accurate predictions. However, these methods compromise the readability of the prompts: because such prompts cannot be understood from their literal meaning, it is difficult to verify whether they are correct, which diminishes the credibility of the probing results derived from them. To address this issue, we propose a novel method called ReadPrompt, which aims to identify meaningful sentences to serve as prompts. Experiments show that ReadPrompt achieves state-of-the-art performance on the current knowledge probing benchmark. Moreover, since the prompts are readable, we discovered a misalignment between the constructed prompts and the knowledge they are meant to probe, which an attack experiment confirms is also present in current prompting methods. We claim that the probing outcomes of current prompting methods are unreliable and overestimate the knowledge contained within PLMs.
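
For readers unfamiliar with the probing setup described in the abstract, the sketch below illustrates cloze-style knowledge probing with a masked language model, in the spirit of the LAMA benchmark. It assumes the Hugging Face `transformers` library and the `bert-base-cased` checkpoint purely for illustration; it shows the generic probing task, not the ReadPrompt method itself.

```python
# Minimal sketch of cloze-style knowledge probing (LAMA-style).
# Assumes: pip install transformers; model choice is illustrative.
from transformers import pipeline

# Fill-mask pipeline predicts the token at the [MASK] position.
fill_mask = pipeline("fill-mask", model="bert-base-cased")

# A manually written prompt encoding the fact (Italy, located_in, Europe).
prompt = "Italy is located in [MASK]."

# Whether the correct answer ranks highly among the predictions indicates
# whether the PLM "knows" this fact; precision over many such facts gives
# a lower bound on the knowledge the model encodes.
for prediction in fill_mask(prompt, top_k=5):
    print(f'{prediction["token_str"]:>12}  {prediction["score"]:.3f}')
```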
Submission Number: 1411