KIMERA: Injecting Domain Knowledge into Vacant Transformer Heads

29 Sept 2021, 00:31 (modified: 04 Oct 2021, 15:17) · ICLR 2022 Conference Withdrawn Submission
Keywords: Transformer, Domain Adaptation, Medical, Clinical, Attention
Abstract: Training transformer language models requires vast amounts of text and computational resources. This drastically limits the usage of these models in niche domains for which they are not optimized, or where domain-specific training data is scarce. We focus on the clinical domain, where training data for common tasks is hard to access while structured ontological data is often readily available. Recent observations in model compression of transformer models show optimization potential in improving the representation capacity of attention heads. We propose KIMERA (Knowledge Injection via Mask Enforced Retraining of Attention) for detecting, retraining, and instilling attention heads with complementary structured domain knowledge. Our novel multi-task training scheme effectively identifies and targets the individual attention heads that are least useful for a given downstream task and optimizes their representation with information from structured data. Due to its multi-task nature, KIMERA generalizes well, thereby building the basis for efficient fine-tuning. KIMERA achieves significant performance boosts on seven datasets in the medical domain, in both Information Retrieval and Clinical Outcome Prediction settings. We apply KIMERA to BERT-base to evaluate the extent of the domain transfer, and we also improve on the already strong results of BioBERT in the clinical domain.
One-sentence Summary: Using a selective attention approach, we identify the least important attention heads of a transformer network and retrain them for domain adaptation.
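The abstract describes scoring attention heads by their usefulness to a downstream task and selecting the least useful ones for retraining. The sketch below illustrates one plausible way such a score could be computed, via per-head ablation: mask out each head in turn and measure how much the layer's output deviates. This is a minimal toy, not the paper's actual procedure; the head-masking mechanism, the output-deviation proxy (KIMERA presumably uses downstream-task loss instead), and all toy dimensions are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, Wq, Wk, Wv, head_mask):
    """Toy multi-head self-attention; head_mask[h] = 0 ablates head h."""
    # x: (seq, d_model); Wq/Wk/Wv: (n_heads, d_model, d_head)
    outputs = []
    for h in range(Wq.shape[0]):
        q, k, v = x @ Wq[h], x @ Wk[h], x @ Wv[h]
        attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))
        outputs.append(head_mask[h] * (attn @ v))
    return np.concatenate(outputs, axis=-1)

# Toy setup: 4 heads, random weights and input (assumed dimensions).
n_heads, d_model, d_head, seq = 4, 16, 4, 8
Wq, Wk, Wv = (rng.normal(size=(n_heads, d_model, d_head)) for _ in range(3))
x = rng.normal(size=(seq, d_model))

full = multi_head_attention(x, Wq, Wk, Wv, np.ones(n_heads))

# Importance proxy: mean absolute output change when a head is ablated.
importance = []
for h in range(n_heads):
    mask = np.ones(n_heads)
    mask[h] = 0.0
    ablated = multi_head_attention(x, Wq, Wk, Wv, mask)
    importance.append(float(np.abs(full - ablated).mean()))

# The lowest-scoring head would be the candidate for knowledge injection.
least = int(np.argmin(importance))
print("per-head importance:", np.round(importance, 4))
print("candidate head for retraining:", least)
```

In the paper's multi-task setting, the selection would presumably be driven by task loss under a head mask rather than raw output deviation, but the ablate-and-score loop is the same shape.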