Efficient data selection employing Semantic Similarity-based Graph Structures for model training

Published: 20 Jun 2023, Last Modified: 11 Oct 2023, SODS 2023 Poster
Keywords: sampling, optimization, discrete space, graphs, semantic similarity, speech, nlp, label recovery, ordinal regression
TL;DR: We propose SeSaME, an effective and efficient data selection mechanism for model training that uses semantic similarity-based graph structures over textual data, framing model performance estimation as an ordinal regression task.
Abstract: Recent developments in natural language processing (NLP) have highlighted the need for substantial amounts of data for models to capture textual information accurately. This raises concerns regarding the computational resources and time required for training such models. This paper introduces \textbf{SE}mantics for data \textbf{SA}liency in \textbf{M}odel performance \textbf{E}stimation (\textbf{SeSaME}), an efficient data sampling mechanism based solely on textual information, without passing the data through a compute-heavy model or other expensive pre-processing transformations. The application of this approach is demonstrated in the use case of low-resource automated speech recognition (ASR) models, which rely heavily on costly text-to-speech (TTS) calls when using augmented data. SeSaME learns to categorize new incoming data points into speech recognition difficulty buckets by employing semantic similarity-based graph structures and discrete ASR information from homophilous neighbourhoods through message passing. The results indicate reliable projections of ASR performance, with a $93\%$ accuracy increase when using the proposed method compared to random predictions, revealing non-trivial information about the impact of textual representations on speech models. Furthermore, a series of experiments shows both the benefits and challenges of using the ASR information on incoming data to fine-tune the model. We report a $7\%$ drop in validation loss compared to random sampling, a $7\%$ WER drop with non-local aggregation when evaluating against a highly difficult dataset, and a $1.8\%$ WER drop with local aggregation and high semantic similarity between datasets.
Submission Number: 22
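The following is a minimal, illustrative sketch of the idea described in the abstract, not the authors' implementation: new utterances are placed in a semantic-similarity graph and their ASR difficulty bucket is estimated by message passing from labelled neighbours. The embedding dimensionality, bucket count, neighbourhood size, and the single-hop mean aggregation are all assumptions made for illustration.

```python
# Hypothetical sketch of semantic-similarity-based difficulty-bucket prediction.
# Assumptions: precomputed sentence embeddings, 4 ordinal difficulty buckets,
# a k-nearest-neighbour similarity graph, and one round of mean message passing.
import numpy as np


def cosine_sim(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between rows of a and rows of b."""
    a_n = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_n = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_n @ b_n.T


def predict_buckets(train_emb, train_buckets, new_emb, k=5, n_buckets=4):
    """Predict ordinal difficulty buckets for new points.

    Each new point aggregates the buckets of its k most semantically similar
    labelled neighbours (one round of mean message passing over the graph)
    and rounds the result to the nearest bucket, treating the task as
    ordinal regression.
    """
    sims = cosine_sim(new_emb, train_emb)          # (n_new, n_train) similarities
    topk = np.argsort(-sims, axis=1)[:, :k]        # indices of k nearest neighbours
    neigh_buckets = train_buckets[topk]            # (n_new, k) neighbour labels
    pred = np.rint(neigh_buckets.mean(axis=1))     # ordinal: round the mean
    return np.clip(pred, 0, n_buckets - 1).astype(int)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_emb = rng.normal(size=(100, 384))        # stand-in for sentence embeddings
    train_buckets = rng.integers(0, 4, size=100)   # known ASR difficulty buckets
    new_emb = rng.normal(size=(10, 384))           # incoming, unlabelled utterances
    print(predict_buckets(train_emb, train_buckets, new_emb))
```

In practice the embeddings would come from a text encoder and the bucket labels from observed ASR error rates; the sketch only illustrates how difficulty estimates can be obtained from textual similarity alone, without running the speech model on the new data.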