Syllable Tokenization Does Not Improve Phonological Awareness in Large Language Models

ACL ARR 2025 July Submission 837 Authors

28 Jul 2025 (modified: 06 Sept 2025) · ACL ARR 2025 July Submission · CC BY 4.0
Abstract: Large Language Models often struggle with downstream tasks involving phonological awareness, despite increasing performance on natural language understanding benchmarks. One leading hypothesis for this discrepancy is that tokenization along non-phonologically informed boundaries prevents models from acquiring phonological information from orthographic input, which makes up the majority of text-based Large Language Model training data. In this paper, we investigate this hypothesis by pretraining one Large Language Model on a Byte Pair Encoding tokenized corpus and an identical model on a syllable-tokenized corpus. We compare their performance on a syllable segmentation task and a word segmentation task, but find no significant improvement from syllable tokenization on either task.
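A minimal sketch of the tokenization contrast under study is shown below. It is not the authors' implementation: it trains a toy BPE tokenizer with the HuggingFace `tokenizers` library and uses `pyphen`'s hyphenation dictionary as a stand-in syllabifier, so the exact boundary placements are illustrative only.

```python
# Sketch (not the paper's code): contrast BPE subword boundaries with
# syllable-like boundaries for the same word.
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer
import pyphen

# Toy corpus; the tiny vocab_size keeps some words split into subwords.
corpus = ["phonological awareness in language models"] * 100

bpe = Tokenizer(BPE(unk_token="[UNK]"))
bpe.pre_tokenizer = Whitespace()
bpe.train_from_iterator(corpus, BpeTrainer(vocab_size=30, special_tokens=["[UNK]"]))

word = "phonological"
print("BPE:      ", bpe.encode(word).tokens)

# Hyphenation points approximate syllable boundaries; a true
# phonologically informed syllabifier would be used in practice.
dic = pyphen.Pyphen(lang="en_US")
print("Syllables:", dic.inserted(word, hyphen=" ").split())
```

The point of the contrast is that BPE merges are frequency-driven, so its token boundaries need not align with syllable boundaries at all.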
Paper Type: Short
Research Area: Phonology, Morphology and Word Segmentation
Research Area Keywords: Phonology, Morphology, and Word Segmentation, Resources and Evaluation
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English
Reassignment Request Area Chair: This is not a resubmission
Reassignment Request Reviewers: This is not a resubmission
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: No
A2 Elaboration: We do not believe that our work poses any potential risks. Some limitations could arise from misinterpreting the results, but none constitute a risk.
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: 3.1, 4.1
B2 Discuss The License For Artifacts: No
B2 Elaboration: All tools used are available under the MIT License.
B3 Artifact Use Consistent With Intended Use: No
B3 Elaboration: All tools are used within the restrictions of their licenses (MIT, Apache). The artifacts we created have not yet been officially released.
B4 Data Contains Personally Identifying Info Or Offensive Content: No
B4 Elaboration: No data was collected from persons that was not already part of an anonymized dataset.
B5 Documentation Of Artifacts: Yes
B5 Elaboration: 1
B6 Statistics For Data: Yes
B6 Elaboration: 5.1.2, 5.2.1
C Computational Experiments: Yes
C1 Model Size And Budget: Yes
C1 Elaboration: 3.4
C2 Experimental Setup And Hyperparameters: Yes
C2 Elaboration: 3.4
C3 Descriptive Statistics: Yes
C3 Elaboration: 5
C4 Parameters For Packages: Yes
C4 Elaboration: 3.3
D Human Subjects Including Annotators: No
D1 Instructions Given To Participants: N/A
D2 Recruitment And Payment: N/A
D3 Data Consent: N/A
D4 Ethics Review Board Approval: N/A
D5 Characteristics Of Annotators: N/A
E Ai Assistants In Research Or Writing: Yes
E1 Information About Use Of Ai Assistants: No
E1 Elaboration: AI was not used outside of general syntax lookup for LaTeX and Python.
Author Submission Checklist: Yes
Submission Number: 837