Correlation Between Attention Heads of BERT

Published: 2022, Last Modified: 06 Jan 2026 · ICEIC 2022 · CC BY-SA 4.0
Abstract: Recently, as deep learning has achieved tremendous success across a variety of application domains, natural language processing based on deep learning has also become widespread in research. Typical models such as the Transformer, BERT, and GPT perform excellently, approaching human-level performance. However, due to the complicated structure of operations such as self-attention, the role of intermediate outputs between layers, and the relationships between latent vectors, have seldom been studied compared to CNNs. In this work, we calculate the correlation between the outputs of the multiple self-attention heads in each layer of a pre-trained BERT model and investigate whether some heads are trained redundantly; that is, we test whether the output latent vectors of one attention head can be linearly transformed into those of another head. Our experiments show that some pairs of heads are highly correlated, which implies that examining the correlation between heads may help optimize the structure of BERT.
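The linear-transformability test described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: it assumes the per-token outputs of two attention heads have been collected into matrices of shape (tokens, head_dim), fits a least-squares linear map from one head's outputs to the other's, and reports the fraction of variance explained (R²) as the correlation score. The function name `head_similarity` and the synthetic data are assumptions for the example.

```python
import numpy as np

def head_similarity(h_a, h_b):
    """Score how well head A's outputs linearly predict head B's.

    h_a, h_b: arrays of shape (n_tokens, head_dim), the latent
    vectors emitted by two attention heads for the same tokens.
    Returns R^2 of the least-squares fit h_a @ W ~ h_b: close to 1
    means head B is (nearly) a linear transform of head A.
    """
    # Fit the linear map W minimizing ||h_a @ W - h_b||^2.
    W, *_ = np.linalg.lstsq(h_a, h_b, rcond=None)
    pred = h_a @ W
    ss_res = np.sum((h_b - pred) ** 2)
    ss_tot = np.sum((h_b - h_b.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic demonstration (stand-in for real BERT head outputs).
rng = np.random.default_rng(0)
h_a = rng.normal(size=(256, 64))
# A "redundant" head: a linear transform of h_a plus small noise.
h_redundant = h_a @ rng.normal(size=(64, 64)) + 0.01 * rng.normal(size=(256, 64))
# An independent head: unrelated random outputs.
h_independent = rng.normal(size=(256, 64))

print(head_similarity(h_a, h_redundant))    # near 1.0: highly correlated
print(head_similarity(h_a, h_independent))  # much lower: no linear relation
```

In practice the head outputs would be extracted from a pre-trained BERT by running a corpus through the model and capturing each head's per-token vectors before the output projection; the same score can then be computed for every pair of heads within or across layers.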