Abstract: One of the central skills that language learners need to practice is speaking. Currently, students in school do not get enough speaking opportunities and lack conversational practice. Recent advances in speech technology and natural language processing enable novel tools for practicing speaking skills. In this work, we tackle the first component of such a pipeline: the automatic speech recognition (ASR) module. First, state-of-the-art models are often trained on read-aloud speech from adult native speakers and do not transfer well to the speech of young language learners. Second, most ASR systems contain a powerful language model that smooths out mistakes made by the speakers; to give corrective feedback, which is a crucial part of language learning, the ASR system in our setting must instead preserve the mistakes made by the language learners. We build an ASR system that satisfies both requirements: it works on spontaneous speech by young language learners and preserves their mistakes. To this end, we collected a corpus of around 85 hours of English audio spoken by Swiss learners in grades 4 to 6 on different language learning tasks, which we used to train an ASR model. Our experiments show that the model benefits from direct fine-tuning on children's voices and achieves a much higher error preservation rate.
Paper Type: long
Research Area: Speech recognition, text-to-speech and spoken language understanding
Contribution Types: Data resources, Data analysis
Languages Studied: English
Preprint Status: There is no non-anonymous preprint and we do not intend to release one.
A1: yes
A1 Elaboration For Yes Or No: The (unnumbered) section is called "Limitations"
A2: yes
A2 Elaboration For Yes Or No: Risks are discussed in the (unnumbered) section “Ethical Considerations”
A3: yes
A3 Elaboration For Yes Or No: Abstract, Section 1
B: yes
B1: yes
B1 Elaboration For Yes Or No: 4.2, 4.3 among others
B2: yes
B2 Elaboration For Yes Or No: 3.4.2
B3: yes
B3 Elaboration For Yes Or No: 3.4.2, 4.2, 4.3
B4: yes
B4 Elaboration For Yes Or No: 3.4.2
B5: yes
B5 Elaboration For Yes Or No: 3.3
B6: yes
B6 Elaboration For Yes Or No: 3.3, 3.4
C: yes
C1: yes
C1 Elaboration For Yes Or No: 4.3
C2: yes
C2 Elaboration For Yes Or No: Section 4; note: no hyperparameter search was performed at this stage; this is also stated in the Limitations section
C3: yes
C3 Elaboration For Yes Or No: Section 4
C4: yes
C4 Elaboration For Yes Or No: 4.1
D: yes
D1: yes
D1 Elaboration For Yes Or No: In 3.1, we have a high-level description of the speaking tasks. The consent modalities are also described in 3.1. The annotation guidelines are submitted as supplementary material under “Data”. Upon acceptance, we will share the full task materials and consent forms, but refrain from doing so now because this is a large body of materials whose anonymisation would be extremely time-consuming.
D2: yes
D2 Elaboration For Yes Or No: 3.1
D3: yes
D3 Elaboration For Yes Or No: 3.1 and Ethical Considerations
D4: yes
D4 Elaboration For Yes Or No: 3.1 and Ethical Considerations
D5: yes
D5 Elaboration For Yes Or No: 3.1
E: yes
E1: yes
E1 Elaboration For Yes Or No: ChatGPT was used to speed up the creation of some of the plots; see the section “Use of AI Assistants” at the very end. No AI assistants were used to write any text of the paper.