Blockwise Self-Attention for Long Document Understanding

Sep 25, 2019 Blind Submission readers: everyone
  • Keywords: BERT, Transformer
  • TL;DR: We present BlockBERT, a lightweight and efficient BERT model designed to better model long-distance dependencies.
  • Abstract: We present BlockBERT, a lightweight and efficient BERT model designed to better model long-distance dependencies. Our model extends BERT by introducing sparse block structures into the attention matrix, which reduces both memory consumption and training time and enables attention heads to capture either short- or long-range contextual information. We conduct experiments on several benchmark question answering datasets with various paragraph lengths. Results show that BlockBERT uses 18.7-36.1% less memory and reduces training time by 12.0-25.1%, while achieving comparable, and sometimes better, prediction accuracy compared to an advanced BERT-based model, RoBERTa. (An illustrative sketch of the blockwise attention pattern is shown below the listing.)
  • Original Pdf: pdf
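
For intuition, here is a minimal PyTorch sketch of the block-sparse attention pattern described in the abstract. This is not the authors' implementation: the names block_mask and blockwise_attention, the permutation argument, and the identity-permutation example are illustrative assumptions. It also materializes the full attention matrix and then masks it, purely for clarity; an actual efficient implementation would compute only the permitted blocks, which is where the memory and time savings come from.

# Illustrative sketch of blockwise sparse attention (assumptions noted above,
# not the BlockBERT code). Assumes seq_len is divisible by block_size and that
# each head attends according to a fixed block permutation, e.g. the identity
# (diagonal blocks, short-range) or a shifted permutation (longer-range).
import torch
import torch.nn.functional as F

def block_mask(seq_len: int, block_size: int, permutation: list) -> torch.Tensor:
    """Boolean mask where query block i may attend only to key block permutation[i]."""
    num_blocks = seq_len // block_size
    assert seq_len % block_size == 0 and len(permutation) == num_blocks
    mask = torch.zeros(seq_len, seq_len, dtype=torch.bool)
    for i, j in enumerate(permutation):
        rows = slice(i * block_size, (i + 1) * block_size)
        cols = slice(j * block_size, (j + 1) * block_size)
        mask[rows, cols] = True
    return mask

def blockwise_attention(q, k, v, block_size: int, permutation: list) -> torch.Tensor:
    """q, k, v: (batch, heads, seq_len, head_dim). Applies block-sparse masking."""
    seq_len, d = q.size(-2), q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5            # (batch, heads, seq, seq)
    mask = block_mask(seq_len, block_size, permutation).to(q.device)
    scores = scores.masked_fill(~mask, float("-inf"))       # drop out-of-block positions
    return F.softmax(scores, dim=-1) @ v

# Example: 8 tokens split into 2 blocks of 4; identity permutation means each
# block attends only to itself (purely local attention for this head).
q = k = v = torch.randn(1, 2, 8, 16)
out = blockwise_attention(q, k, v, block_size=4, permutation=[0, 1])
print(out.shape)  # torch.Size([1, 2, 8, 16])

Under this reading, giving different heads different block permutations is what lets some heads cover short-range context (identity permutation) and others longer-range context (shifted permutations), while each head only ever touches a fraction of the full attention matrix.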
