Keywords: Large Language Model, Multilingual, Machine Translation, Interpretability and Analysis of Models for NLP, Multilingualism and Cross-Lingual NLP
TL;DR: This study introduces a systematic framework for interpreting the translation mechanisms of LLMs from the perspective of computational components, an area previously unexplored.
Abstract: While large language models (LLMs) demonstrate remarkable success in multilingual translation, their internal core translation mechanisms, even at the fundamental word level, remain insufficiently understood.
To address this critical gap, this work introduces a systematic framework for interpreting the mechanism behind LLM translation from the perspective of computational components.
This paper first proposes subspace-intervened path patching for precise, fine-grained causal analysis, enabling the detection of components crucial to translation tasks; the behavioral patterns of these components are then characterized in human-interpretable terms.
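Below is a minimal sketch of the kind of subspace-level activation patching this refers to, written in PyTorch; the orthonormal basis `U`, the toy tensors, and the scoring comment are illustrative assumptions rather than the paper's actual implementation.

```python
import torch

def project_onto_subspace(x: torch.Tensor, U: torch.Tensor) -> torch.Tensor:
    """Project activations x (..., d_model) onto the subspace spanned by the
    orthonormal columns of U (d_model, k)."""
    return (x @ U) @ U.T

def subspace_patch(clean_act: torch.Tensor,
                   corrupt_act: torch.Tensor,
                   U: torch.Tensor) -> torch.Tensor:
    """Swap in the corrupted run's activation only along the chosen subspace,
    leaving the orthogonal complement of the clean activation untouched."""
    return clean_act - project_onto_subspace(clean_act, U) \
                     + project_onto_subspace(corrupt_act, U)

# Toy usage: patch a single component's output along a 2-dimensional subspace.
d_model, k = 8, 2
U, _ = torch.linalg.qr(torch.randn(d_model, k))   # orthonormal basis (d_model, k)
clean = torch.randn(3, d_model)                    # (positions, d_model)
corrupt = torch.randn(3, d_model)
patched = subspace_patch(clean, corrupt, U)
# In practice `patched` would be written back via a forward hook, and the drop
# in a translation metric (e.g., correct-token logit difference) would measure
# that component's causal contribution along the intervened subspace.
```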
Comprehensive experiments reveal that translation is predominantly driven by a sparse subset of components: specialized attention heads play critical roles in extracting source-language, translation-indicator, and positional features, which are then integrated and processed by specific multi-layer perceptrons (MLPs) into intermediary English-centric latent representations before ultimately yielding the final translation.
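One way to probe the claimed English-centric intermediary representations, independent of the paper's own methodology, is a crude logit-lens pass that decodes each layer's hidden state with the unembedding matrix. In the sketch below, `model` and `tokenizer` stand for any Hugging Face causal LM and its tokenizer, and the final layer norm is skipped for simplicity.

```python
import torch

@torch.no_grad()
def top_tokens_per_layer(model, tokenizer, prompt: str, k: int = 5):
    """Decode every layer's last-position hidden state with the unembedding
    matrix and print the top-k tokens, to inspect in which language the
    intermediate representations live."""
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model(**inputs, output_hidden_states=True)
    unembed = model.get_output_embeddings().weight          # (vocab, d_model)
    for layer, hidden in enumerate(out.hidden_states):
        logits = hidden[0, -1] @ unembed.T                  # last position only
        top_ids = logits.topk(k).indices.tolist()
        print(layer, tokenizer.convert_ids_to_tokens(top_ids))
```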
The significance of these findings is underscored by the empirical demonstration that targeted fine-tuning of a minimal parameter subset (<5%) enhances translation performance while preserving general capabilities. We further show that these crucial components generalize effectively to sentence-level translation and help elucidate more intricate translation tasks.
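A minimal sketch of such targeted fine-tuning is given below, assuming the crucial components have already been located; the name patterns (e.g., "layers.12.mlp") are hypothetical placeholders, not the components identified in the paper.

```python
import torch.nn as nn

def freeze_all_but(model: nn.Module, trainable_patterns: list[str]) -> float:
    """Freeze every parameter whose name matches none of the given substrings
    and return the fraction of parameters left trainable."""
    total = trainable = 0
    for name, param in model.named_parameters():
        param.requires_grad = any(p in name for p in trainable_patterns)
        total += param.numel()
        if param.requires_grad:
            trainable += param.numel()
    return trainable / total

# Hypothetical usage: only the identified heads/MLPs receive gradient updates,
# keeping the rest of the network (and its general capabilities) frozen.
# fraction = freeze_all_but(model, ["layers.12.mlp", "layers.15.self_attn"])
# assert fraction < 0.05   # consistent with the <5% parameter budget above
```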
Primary Area: Social and economic aspects of machine learning (e.g., fairness, interpretability, human-AI interaction, privacy, safety, strategic behavior)
Submission Number: 20165