Abstract: Existing CNN-based speech separation models are limited by local receptive fields and cannot effectively capture long-range temporal dependencies. LSTM- and Transformer-based speech separation models avoid this limitation, but their high complexity makes them computationally expensive and slow at inference on long audio. To address this challenge, we introduce an innovative speech separation method called SPMamba. This model builds upon the robust TF-GridNet architecture, replacing its traditional BLSTM modules with bidirectional Mamba modules. These modules effectively model the spatiotemporal relationships between the time and frequency dimensions, allowing SPMamba to capture long-range dependencies with linear computational complexity. In particular, bidirectional processing within the Mamba modules enables the model to exploit both past and future contextual information, thereby enhancing separation performance. Extensive experiments on public datasets, including WSJ0-2Mix, WHAM!, and Libri2Mix, as well as the newly constructed Echo2Mix dataset, demonstrate that SPMamba achieves superior results to previous state-of-the-art (SOTA) models with reduced computational complexity. These findings highlight the effectiveness of SPMamba in addressing the intricate challenges of speech separation in complex environments. The source code for SPMamba is publicly accessible at https://anonymous.4open.science/r/SPMamba-ICME/.
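The linear-complexity, bidirectional modeling claimed above can be illustrated with a toy recurrence. This is a minimal sketch, not the paper's actual selective state-space mechanism: `ssm_scan` and `bidirectional_scan` are hypothetical names, and the scalar coefficients `a`, `b`, `c` stand in for Mamba's learned, input-dependent parameters. The point is that each direction is a single O(T) scan over the sequence, and combining a forward and a reversed scan gives every output access to both past and future context.

```python
def ssm_scan(x, a=0.9, b=1.0, c=1.0):
    """One O(T) pass of a toy linear SSM: h_t = a*h_{t-1} + b*x_t, y_t = c*h_t.

    A real Mamba block makes (a, b, c) input-dependent and vector-valued,
    but the linear-time recurrence structure is the same.
    """
    h, ys = 0.0, []
    for xt in x:
        h = a * h + b * xt   # state update: constant work per time step
        ys.append(c * h)     # readout
    return ys

def bidirectional_scan(x):
    """Bidirectional processing: run the scan forward and on the reversed
    sequence, then merge, so each output sees past and future context."""
    fwd = ssm_scan(x)
    bwd = ssm_scan(x[::-1])[::-1]  # backward scan, re-aligned to time order
    return [f + g for f, g in zip(fwd, bwd)]
```

With an impulse input `[1.0, 0.0, 0.0]`, the forward scan decays the impulse into the future while the backward scan carries it to earlier steps, so the merged output is non-causal, which is the contextual benefit the bidirectional Mamba modules provide (here via simple summation as the merge; the actual fusion in SPMamba may differ).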
External IDs: dblp:conf/icmcs/LiCYH25