Speech-to-LaTeX: New Models and Datasets for Converting Spoken Equations and Sentences

ICLR 2026 Conference Submission 537 Authors

01 Sept 2025 (modified: 21 Nov 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: ASR, multimodal LLM, speech processing, TTS, datasets
Abstract: Converting spoken mathematical expressions is a challenging task that involves transcribing speech into a strictly structured symbolic representation while resolving the ambiguity inherent in the pronunciation of equations. Although significant progress has been achieved in automatic speech recognition (ASR) and language models (LMs), the problem of converting spoken mathematics into LaTeX remains underexplored. This task has direct applications in educational and research settings, such as lecture transcription and note creation. Prior work, based on ASR post-correction, requires two transcriptions, focuses only on isolated equations, relies on a limited test set, and provides neither training data nor multilingual coverage. To address these issues, we present the first fully open-source large-scale dataset, comprising over 66,000 human-annotated audio samples of mathematical equations and sentences in English and Russian, drawn from diverse scientific domains. In addition to ASR post-correction models and few-shot prompting, we apply audio language models, achieving a character error rate (CER) on the MathSpeech benchmark comparable to prior work (28\% vs. 30\%) for equation conversion. On the proposed S2L-equations benchmark, in contrast, our models outperform the MathSpeech model by more than 36 percentage points, even after accounting for LaTeX formatting artifacts (27\% vs. 64\%). We also establish the first benchmark for mathematical sentence recognition (S2L-sentences), achieving an equation CER of 40\%. This work lays the groundwork for future advances in multimodal AI, with a particular focus on mathematical content recognition.
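All results above are reported as character error rate. As a reference for how such numbers are typically computed, here is a minimal sketch of CER as character-level Levenshtein distance normalized by reference length; the function names and the example strings are illustrative, not taken from the paper.

```python
def edit_distance(ref: str, hyp: str) -> int:
    """Character-level Levenshtein distance via a one-row DP table."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))  # dp[j] = distance between ref[:0] and hyp[:j]
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i  # prev holds the diagonal cell D[i-1][j-1]
        for j in range(1, n + 1):
            cur = dp[j]  # save D[i-1][j] before overwriting
            dp[j] = min(
                dp[j] + 1,                          # delete ref[i-1]
                dp[j - 1] + 1,                      # insert hyp[j-1]
                prev + (ref[i - 1] != hyp[j - 1]),  # substitute or match
            )
            prev = cur
    return dp[n]

def cer(reference: str, hypothesis: str) -> float:
    """CER = edit operations / reference length (guarding empty references)."""
    return edit_distance(reference, hypothesis) / max(len(reference), 1)

# Hypothetical example: two extra characters in a predicted LaTeX string.
print(cer(r"\frac{a}{b}", r"\frac{a}{b^2}"))  # 2 / 11 ~= 0.18
```

Note that, as the abstract hints with "LaTeX formatting artifacts," a raw character-level metric penalizes semantically equivalent LaTeX (e.g., spacing or brace variants), which is presumably why the authors report figures both before and after accounting for such artifacts.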
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 537