Ahead-of-Time P-Tuning

Anonymous

03 Sept 2022 (modified: 05 May 2023) · ACL ARR 2022 September Blind Submission
Abstract: This paper proposes AoT P-Tuning, a simple reparametrization of Prefix-Tuning in which prefixes are added to the hidden states before the attention mechanism is evaluated, saving substantial time during inference. We experimented with the proposed method on the GLUE benchmark datasets and observed that AoT P-Tuning performs on par with or better than P-Tuning v2 while being up to $1.3\times$ faster during inference.
Paper Type: long
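
Below is a minimal PyTorch sketch of the idea stated in the abstract, not the paper's actual implementation: a learned, token-indexed bias is added to the hidden states before the attention block, so no extra key/value positions are prepended and the attention cost stays at the original sequence length. The class name AoTPrefix and the token-lookup parametrization are assumptions made for illustration.

import torch
import torch.nn as nn

class AoTPrefix(nn.Module):
    # Hypothetical sketch: add a learned per-token bias to the hidden states
    # *before* attention, instead of prepending prefix key/value vectors as
    # in Prefix-Tuning / P-Tuning v2, so the sequence length is unchanged.
    def __init__(self, vocab_size: int, hidden_size: int):
        super().__init__()
        self.prefix = nn.Embedding(vocab_size, hidden_size)
        nn.init.zeros_(self.prefix.weight)  # start as a no-op bias

    def forward(self, hidden_states: torch.Tensor, input_ids: torch.LongTensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden); input_ids: (batch, seq_len)
        return hidden_states + self.prefix(input_ids)

# Usage sketch (shapes are illustrative): the biased hidden states feed a
# frozen attention layer without adding any extra sequence positions.
prefix = AoTPrefix(vocab_size=50257, hidden_size=768)
h = torch.randn(2, 16, 768)
ids = torch.randint(0, 50257, (2, 16))
h = prefix(h, ids)  # same shape as before, so attention runs at its original cost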