Abstract: Incorporating biasing words obtained through contextual knowledge is paramount in automatic speech recognition (ASR) applications. This paper proposes an innovative method for achieving end-to-end contextual ASR using graph neural network (GNN) encodings based on the tree-constrained pointer generator (TCPGen) method. GNN node encodings facilitate lookahead for future word pieces during ASR decoding: at each tree node, they incorporate information about all word pieces on the branches rooted at that node, yielding a more precise prediction of the generation probability of the biasing words. The study explores three GNN encoding techniques, namely the tree recursive neural network (Tree-RNN), the graph convolutional network (GCN), and GraphSAGE, along with different combinations of the complementary GCN and GraphSAGE structures. The performance of the systems was evaluated on both LibriSpeech and the AMI corpus with a visually grounded contextual ASR pipeline. The findings indicate that GNN encodings achieved consistent and significant reductions in word error rate (WER), particularly for words that are rare or unseen during training. Notably, on LibriSpeech test sets, the combined GNN proposed in this paper achieved a 20% relative rare word error rate reduction compared to Tree-RNN, 30%–40% compared to standard TCPGen, and 60% compared to standard ASR systems without TCPGen.
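To make the lookahead idea concrete, the following is a minimal sketch (not the paper's implementation) of a word-piece prefix tree in which each node's encoding aggregates its entire subtree, so that a node "knows about" all word pieces on the branches rooted at it. The real systems use learned Tree-RNN, GCN, or GraphSAGE layers; this sketch substitutes toy embeddings and mean-pooling purely for illustration, and all names (`build_tree`, `encode`, the example word pieces) are hypothetical.

```python
# Hedged sketch of subtree "lookahead" encodings on a word-piece prefix
# tree, in the spirit of TCPGen with GNN encodings. Assumption: learned
# GNN layers are replaced by a toy embedding plus mean-pooling.

def embed(piece):
    # Toy deterministic "embedding": a 2-dim vector from character codes.
    return [sum(ord(c) for c in piece) % 7, len(piece)]

class Node:
    def __init__(self, piece=""):
        self.piece = piece
        self.children = {}   # word piece -> child Node
        self.encoding = None

def build_tree(biasing_words):
    # Each biasing word is a sequence of word pieces (e.g. from BPE).
    root = Node()
    for pieces in biasing_words:
        node = root
        for p in pieces:
            node = node.children.setdefault(p, Node(p))
    return root

def encode(node):
    # Bottom-up pass: a node's encoding is the mean of its own toy
    # embedding and its children's encodings, so it summarises every
    # word piece on the branches rooted at this node (the lookahead).
    vecs = [embed(node.piece)] if node.piece else []
    for child in node.children.values():
        vecs.append(encode(child))
    node.encoding = [sum(v[i] for v in vecs) / len(vecs) for i in range(2)]
    return node.encoding

# Two biasing words share the prefix piece "tur"; after encoding, the
# "tur" node carries information about both continuations.
root = build_tree([["tur", "ner"], ["tur", "bine"], ["nash"]])
encode(root)
print(sorted(root.children))
```

At decode time, a pointer-generator mechanism would attend over the encodings of the nodes reachable from the current prefix; here the mean-pooled vectors stand in for those learned node representations.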
External IDs: doi:10.1109/taslp.2024.3389645