GroupBERT: Enhanced Transformer Architecture with Efficient Grouped Structures

Published: 28 Jan 2022, Last Modified: 13 Feb 2023, ICLR 2022 Submitted
Keywords: Transformer, BERT, self-supervision, compute efficiency, sparsity, convolution, natural language processing
Abstract: Attention-based language models have become a critical component in state-of-the-art natural language processing systems. However, these models have significant computational requirements, due to long training times, dense operations, and large parameter counts. In this work we demonstrate a set of modifications to the structure of a Transformer layer, producing a more efficient architecture. First, we rely on grouped transformations to reduce the computational cost of dense feed-forward layers while preserving the expressivity of the model. Second, we add a grouped convolution module to complement the self-attention module, decoupling the learning of local and global interactions. We apply the resulting architecture to language representation learning and demonstrate its superior performance compared to BERT models of different scales. We further highlight its improved efficiency, both in terms of floating-point operations (FLOPs) and time-to-train.
One-sentence Summary: We present GroupBERT, which achieves more than a 2x efficiency improvement over BERT in terms of FLOPs and time-to-train.
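
The abstract describes two layer-level modifications: grouped transformations in the feed-forward block and a grouped convolution module alongside self-attention. The following is a minimal PyTorch sketch of those two ideas, assuming illustrative module names, group counts, and kernel sizes; it is not the paper's exact configuration.

```python
# Hedged sketch: grouped feed-forward block and grouped convolution module,
# the two structural ideas named in the abstract. Group counts and kernel
# size below are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn


class GroupedFFN(nn.Module):
    """Feed-forward block whose dense projections are replaced by grouped ones.

    A pointwise Conv1d with groups > 1 expresses a block-diagonal (grouped)
    linear transformation, reducing FLOPs and parameters roughly by the
    group count while keeping the hidden width.
    """

    def __init__(self, d_model: int, d_ff: int, groups: int = 4):
        super().__init__()
        self.up = nn.Conv1d(d_model, d_ff, kernel_size=1, groups=groups)
        self.act = nn.GELU()
        self.down = nn.Conv1d(d_ff, d_model, kernel_size=1, groups=groups)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); Conv1d expects (batch, channels, seq_len)
        h = x.transpose(1, 2)
        h = self.down(self.act(self.up(h)))
        return h.transpose(1, 2)


class GroupedConvModule(nn.Module):
    """Grouped 1D convolution over the sequence dimension.

    Captures local token interactions, so the self-attention module can
    concentrate on global (long-range) dependencies.
    """

    def __init__(self, d_model: int, kernel_size: int = 7, groups: int = 16):
        super().__init__()
        self.conv = nn.Conv1d(
            d_model, d_model, kernel_size, padding=kernel_size // 2, groups=groups
        )
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.conv(x.transpose(1, 2))).transpose(1, 2)


if __name__ == "__main__":
    x = torch.randn(2, 128, 768)  # (batch, seq_len, d_model)
    print(GroupedFFN(768, 3072)(x).shape)   # torch.Size([2, 128, 768])
    print(GroupedConvModule(768)(x).shape)  # torch.Size([2, 128, 768])
```

In a full layer these modules would be wrapped with residual connections and normalization, with the grouped convolution module placed next to self-attention so that local and global interactions are learned by separate components.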