Abstract: In this paper, we investigate how large language models (LLMs) process non-English tokens in their layer representations, an open question despite significant advances in the field. Using representation steering, specifically adding a learned vector to the activations of a single model layer, we demonstrate that intervening on one layer can notably enhance multilingual performance. Our analysis shows that this approach achieves results comparable to translation baselines and surpasses state-of-the-art prompt optimization methods. Additionally, we highlight how supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) improve multilingual capabilities by altering representation spaces, and we illustrate how these methods align with our approach of reshaping LLM layer representations.
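The single-layer intervention described in the abstract can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the layer stack, dimensions, and the steering vector here are all stand-ins (the paper's vector is learned, whereas this one is random).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a stack of transformer layers: each "layer" is a
# fixed linear map followed by a nonlinearity. Shapes are illustrative.
d_model, n_layers = 8, 4
layers = [rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
          for _ in range(n_layers)]

def forward(h, steer_layer=None, steer_vec=None):
    """Run the layer stack; optionally add a steering vector to the
    hidden state right after one chosen layer."""
    for i, W in enumerate(layers):
        h = np.tanh(h @ W)          # placeholder layer computation
        if i == steer_layer:
            h = h + steer_vec       # the single-layer intervention
    return h

h0 = rng.standard_normal(d_model)
v = 0.1 * rng.standard_normal(d_model)  # stand-in for the learned vector

out_base = forward(h0)
out_steered = forward(h0, steer_layer=2, steer_vec=v)
```

In practice this kind of intervention is typically applied to a real model with a forward hook on the chosen layer; the point of the sketch is only that the edit touches one layer's activations and propagates through the rest of the network.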
Paper Type: Long
Research Area: Multilingualism and Cross-Lingual NLP
Research Area Keywords: code-switching, multilingualism, language contact, language change, linguistic variation, cross-lingual transfer, multilingual pre-training, less-resourced languages, endangered languages, indigenous languages, multilingual benchmarks.
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches to low-resource settings, Data analysis
Languages Studied: English, Spanish, French, Russian, German, Japanese, Chinese, Turkish, Arabic, Vietnamese, Hindi, Greek, Indonesian, Italian, Portuguese.
Submission Number: 1635