Keywords: Visual Tokens, Chinese Language Modeling, Low-resolution Visual Inputs, Hot-start, Explainable NLP
Abstract: Large language models typically represent Chinese characters as discrete index-based tokens, largely ignoring their visual form. For logographic scripts, visual structure carries semantic and phonetic information, which may aid prediction. We investigate whether low-resolution visual inputs can serve as an alternative for character-level modeling. Instead of token IDs, our decoder receives grayscale images of individual characters, with resolutions as low as $8 \times 8$ pixels. Remarkably, these inputs achieve 39.2\% accuracy, comparable to the index-based baseline of 39.1\%. These low-resolution settings also exhibit a pronounced \emph{hot-start} effect: after only 0.4\% of total training, accuracy exceeds 12\%, while index-based models remain below 6\%. Overall, our results demonstrate that minimal visual structure can provide a robust and efficient signal for Chinese language modeling, offering an alternative perspective on character representation that complements traditional index-based approaches.
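The input scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes each character arrives pre-rendered as an $8 \times 8$ grayscale bitmap, and replaces the decoder's usual embedding lookup with a learned linear projection of the flattened pixels (vocabulary size and model width are hypothetical).

```python
import numpy as np

D_MODEL = 256  # hypothetical decoder width
RES = 8        # 8x8 grayscale glyph, as in the paper

rng = np.random.default_rng(0)

# Index-based baseline: one learned row per character ID.
# ~21k rows is an assumption (roughly the CJK Unified Ideographs block).
vocab_embedding = rng.normal(size=(21000, D_MODEL))

def embed_by_index(char_id: int) -> np.ndarray:
    """Standard lookup: the visual form of the character is ignored."""
    return vocab_embedding[char_id]

# Visual alternative: project the 64 pixel intensities instead of indexing.
pixel_projection = rng.normal(size=(RES * RES, D_MODEL))

def embed_by_pixels(glyph: np.ndarray) -> np.ndarray:
    """glyph: (8, 8) grayscale array in [0, 1]; returns a d_model vector."""
    assert glyph.shape == (RES, RES)
    return glyph.reshape(-1) @ pixel_projection

# Stand-in for a rendered character image.
glyph = rng.uniform(size=(RES, RES))
vec = embed_by_pixels(glyph)
print(vec.shape)  # (256,)
```

Both paths yield a vector of the decoder's width, so the rest of the model is unchanged; only the character-to-vector interface differs, which is what makes the two representations directly comparable.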
Paper Type: Long
Research Area: Speech Processing and Spoken Language Understanding
Research Area Keywords: Language Modeling, pre-training, word embeddings, interpretability, probing, feature attribution, cross-modal pretraining, data-efficient training, Chinese segmentation, robustness
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches to low-resource settings
Languages Studied: Chinese
Submission Number: 3712