Abstract: Contemporary theories model language processing as integrating both top-down expectations and bottom-up inputs.
One major prediction of such models is that the quality of the bottom-up inputs modulates ease of processing: noisy inputs should make comprehension slower and more effortful.
We test this prediction in the domain of reading.
First, we propose an information-theoretic operationalization for the "quality" of bottom-up information as the mutual information (MI) between visual information and word identity.
We formalize this prediction in a mathematical model of reading as Bayesian update.
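As a rough sketch of this operationalization (the symbols $V$ for visual input and $W$ for word identity are our own notational assumptions, not necessarily the paper's), the quality measure and the Bayesian update can be written as:

```latex
% Quality of bottom-up input: mutual information between
% visual input V and word identity W
I(V; W) = \sum_{v, w} p(v, w) \log \frac{p(v, w)}{p(v)\, p(w)}

% Reading as Bayesian update: the reader's posterior belief
% about word identity given the visual input
p(w \mid v) \propto p(v \mid w)\, p(w)
```

Under this view, occluding part of a word lowers $I(V;W)$, leaving the posterior $p(w \mid v)$ more diffuse and comprehension correspondingly more effortful.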
Second, we test our operationalization by comparing participants' reading times on words whose information quality has been reduced by occluding either their top or bottom half against reading times on intact words.
We collect data in English and Chinese.
We then use multimodal language models to estimate the mutual information between visual inputs and words.
We use these data to estimate the specific effect of reduced information quality on reading times.
Finally, we compare how information is distributed across visual forms.
In English and Chinese, the upper half contains more information about word identity than the lower half.
However, the asymmetry is more pronounced in English, a pattern that is also reflected in the reading times.
Paper Type: Long
Research Area: Linguistic theories, Cognitive Modeling and Psycholinguistics
Research Area Keywords: linguistic theories; cognitive modeling; computational psycholinguistics; image text matching
Contribution Types: Data resources, Data analysis, Theory
Languages Studied: Chinese, English
Submission Number: 7948