aCAT: Automatically Choosing Anchor Tokens in Prompts for Natural Language Understanding

ACL ARR 2024 June Submission1554 Authors

14 Jun 2024 (modified: 02 Aug 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: P-tuning has demonstrated that anchor tokens are beneficial for improving performance on downstream tasks. However, selecting anchor tokens manually may produce subjective or suboptimal results. In this paper, we present aCat, a method that chooses anchor tokens automatically. Following the soft-hard prompt paradigm, aCat constructs prompt templates automatically. Experiments on natural language understanding benchmarks demonstrate the effectiveness of the proposed method: on seven SuperGLUE datasets, it achieves higher accuracy than P-tuning, and its average accuracy exceeds that of P-tuning v2.
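To illustrate the soft-hard prompt paradigm the abstract refers to, the sketch below builds a template that interleaves trainable soft pseudo-tokens with a discrete anchor token picked from a candidate pool. All names and the toy scoring function are illustrative assumptions, not the paper's actual procedure (which selects anchors automatically, e.g. via reinforcement learning).

```python
# Hypothetical sketch of a soft-hard prompt template: soft slots are
# trainable pseudo-tokens ([S0], [S1], ...) while the anchor is a discrete
# token chosen from a candidate pool. Names are illustrative only.

def build_template(n_soft, anchor, x_slot="[X]", mask_slot="[MASK]"):
    """Interleave soft pseudo-tokens with a chosen hard anchor token."""
    soft = [f"[S{i}]" for i in range(n_soft)]
    # e.g. "[S0] [S1] [X] <anchor> [S2] [MASK]"
    return soft[:2] + [x_slot, anchor] + soft[2:] + [mask_slot]

def choose_anchor(candidates, score_fn):
    """Pick the highest-scoring candidate anchor -- a stand-in for the
    automatic selection the paper performs."""
    return max(candidates, key=score_fn)

if __name__ == "__main__":
    # Toy score preferring shorter anchors (purely illustrative).
    anchor = choose_anchor(["question", "?", "it is"], lambda a: -len(a))
    print(" ".join(build_template(3, anchor)))
```

In practice the score would come from downstream task performance rather than a hand-written heuristic; the point of aCat is precisely to replace manual anchor choice with such a learned criterion.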
Paper Type: Short
Research Area: Machine Learning for NLP
Research Area Keywords: soft-hard prompt, reinforcement learning
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 1554