TAML-Adapter: Enhancing Adapter Tuning Through Task-Agnostic Meta-Learning for Low-Resource Automatic Speech Recognition
Abstract: Parameter-efficient fine-tuning of pre-trained multilingual speech models can significantly improve speech recognition performance on target languages. However, conventional parameter-efficient methods such as adapter tuning typically start from randomly initialized adapters, which can lead to suboptimal performance when adapting to low-resource languages. To address this issue, this letter introduces TAML-Adapter, which uses the Task-Agnostic Meta-Learning (TAML) algorithm to initialize the adapter parameters before fine-tuning on target low-resource languages. Comprehensive experiments on the Common Voice and Fleurs datasets demonstrate the superior performance of TAML-Adapter on five low-resource languages. In addition, TAML-Adapter exhibits better generalizability and extensibility than similar competing methods.
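For intuition, the following is a minimal, self-contained PyTorch sketch of the kind of meta-initialization the abstract describes: a MAML-style inner/outer loop over source-language tasks that updates only the adapter weights, with a variance penalty over per-task query losses as a simple stand-in for TAML's inequality-minimization term. The toy backbone, synthetic tasks, and all hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Frozen backbone and output head, standing in for the pre-trained
# multilingual speech model whose weights are never updated.
backbone = torch.nn.Linear(16, 16)
for p in backbone.parameters():
    p.requires_grad_(False)
readout = torch.randn(16, 8)  # frozen output head stand-in

# Residual bottleneck adapter: the only parameters being meta-learned.
w_down = (0.01 * torch.randn(16, 4)).requires_grad_()
w_up = (0.01 * torch.randn(4, 16)).requires_grad_()
adapter = [w_down, w_up]

def forward(x, params):
    """Backbone with a residual bottleneck adapter; adapter weights are
    passed explicitly so the inner loop can use adapted 'fast' copies."""
    wd, wu = params
    h = backbone(x)
    h = h + F.relu(h @ wd) @ wu  # residual adapter
    return h @ readout

def sample_task(n=32):
    """Synthetic stand-in for one source-language task (support/query
    split of labelled data); each task is a random linear mapping."""
    w_task = torch.randn(16, 8)
    x = torch.randn(2 * n, 16)
    y = x @ w_task
    return (x[:n], y[:n]), (x[n:], y[n:])

meta_opt = torch.optim.Adam(adapter, lr=1e-3)
inner_lr, lam = 0.05, 0.1

for step in range(200):
    task_losses = []
    for _ in range(4):  # a meta-batch of source-language tasks
        (xs, ys), (xq, yq) = sample_task()
        # Inner step: adapt a differentiable copy of the adapter on support data.
        loss_s = F.mse_loss(forward(xs, adapter), ys)
        grads = torch.autograd.grad(loss_s, adapter, create_graph=True)
        fast = [p - inner_lr * g for p, g in zip(adapter, grads)]
        # Outer loss: adapted adapter evaluated on the task's query data.
        task_losses.append(F.mse_loss(forward(xq, fast), yq))
    losses = torch.stack(task_losses)
    # Task-agnostic term: penalize unequal losses across tasks (variance
    # used here as a simple stand-in for TAML's inequality measures).
    meta_loss = losses.mean() + lam * losses.var()
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()

# `adapter` now holds a meta-learned initialization; fine-tuning on the
# target low-resource language would start from it rather than from random.
```

The design point this illustrates is the one the abstract makes: the inequality-style penalty discourages an initialization biased toward any single source language, so the resulting adapter weights should adapt more evenly to unseen low-resource targets.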