Adapter Tuning With Task-Aware Attention Mechanism

ICASSP 2023. Published: 01 Jan 2023, Last Modified: 21 Feb 2024
Abstract: Adapter tuning inserts simple feed-forward layers (adapters) into pre-trained language models (PLMs) and tunes only the adapters when transferring to downstream tasks; it has become the state-of-the-art parameter-efficient tuning (PET) strategy. Although the adapters aim to learn task-related representations, their inputs are still produced by the task-independent and frozen multi-head attention (MHA) modules, which leads to insufficient utilization of contextual information across downstream tasks. Intuitively, MHA should be task-dependent and attend to different contexts in different downstream tasks. This paper therefore proposes the task-aware attention mechanism (TAM) to enhance adapter tuning. Specifically, we first use a task-dependent adapter to generate token-wise task embeddings. We then apply these task embeddings to influence MHA so that it aggregates contextual information in a task-dependent manner. Experimental results on a wide range of natural language understanding and generation tasks demonstrate the effectiveness of our method. Furthermore, extensive analyses show that the generated task embeddings correlate with task difficulty.
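The abstract describes the architecture only at a high level, so the following PyTorch sketch is purely illustrative. It assumes a standard bottleneck adapter (down-projection, nonlinearity, up-projection) and one plausible way for the token-wise task embedding to influence a frozen MHA module, namely adding it to the attention queries. The class names, the bottleneck size, the ReLU nonlinearity, and the query-biasing choice are all assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck feed-forward adapter: down-project, nonlinearity, up-project."""

    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor, residual: bool = True) -> torch.Tensor:
        h = self.up(self.act(self.down(x)))
        return x + h if residual else h


class TaskAwareAttentionSketch(nn.Module):
    """Illustrative sketch: a trainable, task-dependent adapter produces token-wise
    task embeddings that bias the queries of a frozen multi-head attention module.
    The query-biasing choice is an assumption, not the paper's exact formulation."""

    def __init__(self, d_model: int, n_heads: int, bottleneck: int = 64):
        super().__init__()
        self.mha = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        for p in self.mha.parameters():  # pre-trained MHA stays frozen
            p.requires_grad = False
        self.task_adapter = Adapter(d_model, bottleneck)  # only trainable part

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        task_emb = self.task_adapter(x, residual=False)   # token-wise task embedding
        q = x + task_emb                                  # task embedding influences MHA
        out, _ = self.mha(q, x, x, need_weights=False)
        return out


# Usage: only the adapter parameters receive gradients.
x = torch.randn(2, 16, 768)                               # (batch, tokens, hidden)
layer = TaskAwareAttentionSketch(d_model=768, n_heads=12)
print(layer(x).shape)                                     # torch.Size([2, 16, 768])
```

In this reading, parameter efficiency comes from freezing the MHA weights and updating only the small adapter, while task dependence comes from letting the adapter's output modulate how attention aggregates context.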