Mixup-Based Knowledge Distillation with Causal Intervention for Multi-Task Speech Classification

Published: 27 Oct 2023, Last Modified: 05 Dec 2023, CRL@NeurIPS 2023 Poster
Keywords: mixup, knowledge distillation, causal intervention, hierarchical multi-task learning, age group hierarchy
TL;DR: Using mixup-based knowledge distillation and hierarchical multi-task learning with causal inference, this study addresses ambiguous age-group boundaries in speech and significantly improves multi-task speech classification on Korean open-set corpora.
Abstract: Speech classification, which determines a speaker's gender and age group, is an essential yet challenging multi-task problem. Existing methods struggle to extract features indicative of certain age groups because age perception in speech is inherently ambiguous, and they cannot fully capture the causal relations between speech representations and the multi-label space. This study attributes the ambiguity of age-group boundaries to the considerable variability of speech even within a single age group; for example, features characteristic of speakers in their 20s can also be shared by some speakers in their 30s. A two-step approach is therefore proposed: (1) mixup-based knowledge distillation that removes biased knowledge via causal intervention, and (2) hierarchical multi-task learning with causal inference over the age-group hierarchy that exploits the shared information in label dependencies. Empirical experiments on Korean open-set speech corpora demonstrate that the proposed methods yield a significant performance boost in multi-task speech classification.
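For readers unfamiliar with the mixup-plus-distillation recipe the abstract refers to, below is a minimal sketch in PyTorch of a mixup-based knowledge-distillation loss. It is illustrative only: the names (student, teacher, alpha, T) and the temperature-scaled KL distillation term are standard assumptions, not the paper's exact formulation, and the causal-intervention component is omitted.

import torch
import torch.nn.functional as F

def mixup_kd_loss(student, teacher, x, y, alpha=0.4, T=2.0):
    """Mix pairs of inputs, then distill the teacher's predictions on the
    mixed inputs into the student alongside a mixed hard-label loss.
    (A sketch of the standard recipe, not the paper's implementation.)"""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[idx]      # mixup in input space

    with torch.no_grad():                        # teacher provides soft targets
        t_logits = teacher(x_mix)
    s_logits = student(x_mix)

    # Distillation term: KL between temperature-softened distributions.
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                  F.softmax(t_logits / T, dim=-1),
                  reduction="batchmean") * (T * T)

    # Mixed cross-entropy on the interpolated hard labels.
    ce = lam * F.cross_entropy(s_logits, y) \
         + (1.0 - lam) * F.cross_entropy(s_logits, y[idx])
    return kd + ce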
Submission Number: 18