SPE: Symmetrical Prompt Enhancement for Factual Knowledge Retrieval

Anonymous

16 Oct 2021 (modified: 05 May 2023) | ACL ARR 2021 October Blind Submission | Readers: Everyone
Abstract: Pretrained language models (PLMs) have been shown to accumulate factual knowledge during unsupervised pretraining (Petroni et al., 2019). Prompting is an effective way to query such knowledge from PLMs. Recently, continuous prompt methods have been shown to have greater potential than discrete prompt methods for generating effective queries (Liu et al., 2021a). However, these methods do not consider the symmetry of the task. In this work, we propose Symmetrical Prompt Enhancement (SPE), a continuous prompt-based method for fact retrieval that leverages the symmetry of the task. Our results on LAMA, a popular fact retrieval dataset, show a significant improvement of SPE over previous prompt methods.
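The abstract does not specify how SPE combines the two directions, but the core intuition of symmetry in fact retrieval can be sketched with a toy example. Below, hypothetical score tables stand in for a PLM's prompt-conditioned distributions: a forward query scores objects given a subject and relation, a backward query scores subjects given an object, and the two scores are combined multiplicatively. All names, numbers, and the combination rule here are illustrative assumptions, not the authors' method.

```python
# Toy illustration (NOT the paper's implementation): symmetry-aware fact scoring.
# The dicts below stand in for prompt-conditioned PLM distributions.

# "Forward" distribution P(object | subject, relation) — here it is miscalibrated
# and prefers the wrong answer on its own.
forward = {("Paris", "capital-of"): {"France": 0.4, "Texas": 0.6}}

# "Backward" distribution P(subject | object, relation) — querying in the
# reverse direction, as a symmetric prompt might.
backward = {
    ("France", "capital-of"): {"Paris": 0.9, "Lyon": 0.1},
    ("Texas", "capital-of"): {"Austin": 0.8, "Paris": 0.2},
}

def symmetric_score(subj, rel, obj):
    """Combine forward and backward scores multiplicatively (one simple choice)."""
    f = forward.get((subj, rel), {}).get(obj, 0.0)
    b = backward.get((obj, rel), {}).get(subj, 0.0)
    return f * b

def best_object(subj, rel, candidates):
    """Pick the candidate object with the highest symmetric score."""
    return max(candidates, key=lambda o: symmetric_score(subj, rel, o))

# Forward-only would pick "Texas" (0.6 > 0.4), but the backward direction
# corrects it: France scores 0.4 * 0.9 = 0.36 vs. Texas 0.6 * 0.2 = 0.12.
print(best_object("Paris", "capital-of", ["France", "Texas"]))  # → France
```

The point of the sketch is only that a reverse-direction query can veto an answer the forward query prefers; how SPE actually parameterizes and trains the symmetric continuous prompts is detailed in the paper itself.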
