Abstract: Supervised deep learning has significantly improved bandwidth extension (BWE), and the emergence of self-supervised learning (SSL) has prompted exploration of combining SSL with BWE. Although SSL-based deep learning models have been shown to produce better representations than their supervised counterparts when trained naively, their effectiveness diminishes when the model learns different tasks sequentially. To address this problem, we propose CLASS, a continual learning framework that incorporates continual learning (CL) and self-supervised pretraining (SSP) to improve BWE performance. The framework integrates SSP and BWE fine-tuning with CL approaches, enabling the model to retain its representational knowledge while adapting to BWE as the target task. We employ a CL fine-tuning loss or an exponential moving average (EMA) algorithm to gradually update model parameters and learn to reconstruct wideband signals from narrowband signals without losing information from the previous task. In addition, we present a new continual loss that extends elastic weight consolidation (EWC) by updating the Fisher information matrix for better BWE performance. Our experimental results demonstrate that the proposed method outperforms the baseline approach on the TIMIT dataset. Furthermore, we explore the impact of different hyperparameter settings, contributing to a more comprehensive understanding of the proposed framework's performance.
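For reference, a minimal sketch of the quantities the abstract refers to, written in standard notation rather than the paper's own: the classic EWC penalty of Kirkpatrick et al. (2017), which the proposed continual loss is assumed to extend by updating the Fisher information matrix during BWE fine-tuning, and a generic EMA parameter update. The symbols (lambda, m, theta*) are illustrative assumptions, not the paper's definitions.

% Standard EWC-regularized objective: L_BWE is the target-task loss,
% F_i the Fisher information for parameter i, and theta*_SSP the
% parameters retained from the self-supervised pretraining task.
\[
  \mathcal{L}(\theta) \;=\; \mathcal{L}_{\mathrm{BWE}}(\theta)
  \;+\; \sum_{i} \frac{\lambda}{2}\, F_i \left( \theta_i - \theta^{*}_{\mathrm{SSP},\,i} \right)^2
\]

% Generic exponential moving average (EMA) update of the retained
% parameters \bar{\theta}, with momentum m close to 1:
\[
  \bar{\theta} \;\leftarrow\; m\, \bar{\theta} \;+\; (1 - m)\, \theta
\]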