Parameter-Efficient Abstractive Question Answering over Tables and over Text

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission · Readers: Everyone
Abstract: A long-term ambition of information-seeking question answering (QA) systems is to reason over multi-modal contexts and generate natural answers to user queries. Today, memory-intensive pre-trained language models are adapted to downstream tasks such as QA by fine-tuning the model on QA data in a specific modality, such as unstructured text or structured tables. To avoid training such memory-hungry models while maintaining a uniform architecture for each modality, parameter-efficient transfer learning techniques such as adapters add and train small task-specific bottleneck layers between transformer layers. However, modality-specific adapter layers infused into a pre-trained transformer also require uniformity in the input sequence, which conflicts with existing work that trains structure-specific layers on multi-modal data. In this work, we study parameter-efficient abstractive QA in encoder-decoder models over structured tabular data and unstructured textual data, using only 1.5% additional parameters for each modality. We retain table structure information through a hierarchy-preserving transformation of complex hierarchical tables into 1-dimensional sequences, thus maintaining uniformity in the model input. We also ablate over adapter layers in both the encoder and decoder modules, study the efficiency-performance trade-off, and demonstrate that reducing the additional trainable parameters to 0.7%–1.0% yields comparable results. Our models outperform current state-of-the-art models on tabular QA datasets such as Tablesum and FeTaQA, and achieve comparable performance on a text QA dataset such as NarrativeQA, using significantly fewer trainable parameters.
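The bottleneck adapters the abstract refers to can be sketched as a small residual module inserted between frozen transformer layers; only the adapter's parameters are trained. The sketch below is a minimal, generic bottleneck adapter (in the style of Houlsby et al.), not the paper's exact implementation — the hidden and bottleneck sizes, the ReLU activation, and the `Adapter` class name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project,
    plus a residual connection. Inserted between frozen transformer
    layers; only these parameters are updated during fine-tuning.
    Sizes and activation are illustrative, not the paper's config."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # compress
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # restore
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen backbone's signal intact.
        return x + self.up(self.act(self.down(x)))

# Example sizes: a 768-dim backbone with a 48-dim bottleneck gives a
# per-adapter parameter count that is a small fraction of the backbone.
hidden, bottleneck = 768, 48
adapter = Adapter(hidden, bottleneck)
n_params = sum(p.numel() for p in adapter.parameters())
# 768*48 + 48 (down) + 48*768 + 768 (up) = 74544 trainable parameters
print(n_params)
```

Training one such adapter stack per modality (tables vs. text) while sharing the frozen backbone is what keeps the additional parameter budget at the ~1.5% level the abstract mentions.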