Modeling Bilingual Sentence Processing: Evaluating RNN and Transformer Architectures for Cross-Language Structural Priming

ACL ARR 2024 June Submission 3401 Authors

16 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: This study evaluates the performance of Recurrent Neural Network (RNN) and Transformer models in replicating cross-language structural priming, a key indicator of abstract grammatical representations in human language processing. Focusing on Chinese-English priming, which involves two typologically distinct languages, we examine how these models handle the robust phenomenon of structural priming, where exposure to a particular sentence structure increases the likelihood of subsequently producing a similar structure. Additionally, we use large language models (LLMs) to measure the cross-language structural priming effect. Our findings indicate that Transformers outperform RNNs in generating primed sentence structures, challenging the conventional belief that human sentence processing primarily involves recurrent and immediate processing, and suggesting a role for cue-based retrieval mechanisms. More broadly, this work contributes to our understanding of how computational models may reflect human cognitive processes in multilingual contexts.
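To make the measurement idea concrete, the sketch below shows one common way to quantify structural priming with a causal language model: compare the log-probability a model assigns to a target sentence after primes of different structures. This is a minimal illustration assuming the Hugging Face `transformers` API; the model name (`gpt2`), the example sentences, and the alternation (prepositional-object vs. double-object datives) are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch: measuring a structural priming effect with a causal LM.
# Assumes Hugging Face transformers + PyTorch; model and sentences are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()


def target_logprob(prime: str, target: str) -> float:
    """Summed log-probability of the target sentence conditioned on the prime."""
    prime_ids = tokenizer(prime, return_tensors="pt").input_ids
    full_ids = tokenizer(prime + " " + target, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probability of each token given its left context.
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    positions = torch.arange(full_ids.shape[1] - 1)
    token_lp = logprobs[positions, full_ids[0, 1:]]
    # Keep only tokens belonging to the target; this assumes the prime's
    # tokenization is a prefix of the combined string (typical for GPT-2 BPE
    # on whitespace-separated English text).
    return token_lp[prime_ids.shape[1] - 1:].sum().item()


# Does a prepositional-object (PO) prime raise the probability of a PO target
# relative to a double-object (DO) prime?
po_prime = "The teacher gave a book to the student."
do_prime = "The teacher gave the student a book."
po_target = "The boy sent a letter to his friend."

effect = target_logprob(po_prime, po_target) - target_logprob(do_prime, po_target)
print(f"PO-prime advantage for PO target (log-prob difference): {effect:.3f}")
```

A positive difference would indicate a priming-like effect: the model finds the PO target more probable after a PO prime than after a DO prime. A cross-language variant of this setup would use a prime in one language (e.g., Chinese) and a target in the other (e.g., English).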
Paper Type: Long
Research Area: Linguistic theories, Cognitive Modeling and Psycholinguistics
Research Area Keywords: linguistic theories; cognitive modeling; computational psycholinguistics
Contribution Types: Model analysis & interpretability, Theory
Languages Studied: Chinese, English
Submission Number: 3401