Attention or Convolution: Transformer Encoders in Audio Language Models for Inference Efficiency

Published: 01 Jan 2024, Last Modified: 17 Dec 2024 · ICASSP Workshops 2024 · CC BY-SA 4.0
Abstract: In this paper, we show that a simple audio language model can achieve inference efficiency comparable to more complicated pre-trained models with speech transformer encoders. These speech transformers mix convolutional modules with self-attention modules and achieve state-of-the-art ASR performance with top efficiency. We first show that employing these speech transformers as an encoder significantly improves the efficiency of audio language models as well. However, our study shows that we can achieve comparable efficiency with self-attention alone. We demonstrate that this simpler approach is particularly beneficial when combined with low-bit weight quantization of the neural network to improve efficiency. We hypothesize that it avoids propagating errors between different quantized modules, unlike recent speech transformers that mix quantized convolution and quantized self-attention modules. Our study suggests that paying attention to the architecture of audio language models can improve their inference efficiency.
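For illustration, the snippet below is a minimal sketch, not the authors' released code, of the kind of low-bit weight quantization discussed in the abstract, applied to a self-attention-only encoder in PyTorch. The helper name `quantize_weights_`, the 4-bit setting, and the per-channel symmetric scheme are assumptions for exposition only.

```python
# Minimal sketch: simulated low-bit weight quantization of a
# self-attention-only Transformer encoder (no convolutional modules).
import torch
import torch.nn as nn


def quantize_weights_(module: nn.Module, num_bits: int = 4) -> None:
    """Per-channel symmetric fake quantization applied in place to every
    nn.Linear inside `module`: weights stay float but take only
    2**num_bits distinct levels per output channel."""
    qmax = 2 ** (num_bits - 1) - 1
    for sub in module.modules():
        if isinstance(sub, nn.Linear):
            w = sub.weight.data
            # One scale per output channel, avoiding division by zero.
            scale = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / qmax
            sub.weight.data = torch.round(w / scale).clamp(-qmax, qmax) * scale


# A pure self-attention encoder stack, the simpler alternative to
# conv/attention hybrid speech transformers.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True),
    num_layers=6,
)
quantize_weights_(encoder, num_bits=4)

x = torch.randn(2, 100, 256)  # (batch, frames, features)
with torch.no_grad():
    y = encoder(x)
print(y.shape)  # torch.Size([2, 100, 256])
```

Because every quantized submodule here is a linear projection feeding self-attention or feed-forward blocks, there is no interleaving of quantized convolution and quantized attention, which is the property the hypothesis above attributes the reduced error propagation to.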