CC-Tuning: A Cross-Lingual Connection Mechanism for Improving Joint Multilingual Supervised Fine-Tuning
Abstract: Current large language models (LLMs) often exhibit imbalanced multilingual capabilities due to their English-centric training corpora.
To address this, existing fine-tuning approaches that operate at the data level (e.g., through data augmentation or distillation) typically introduce only implicit cross-lingual alignment, overlooking the potential for more profound, latent-level cross-lingual interactions.
In this work, we propose CC-Tuning, a novel multilingual fine-tuning paradigm that explicitly establishes a cross-lingual connection mechanism at the latent level.
During training, CC-Tuning fuses the feed-forward activations from English and non-English inputs, enabling the model to benefit from both linguistic resources.
This process is guided by a trainable Decision Maker that identifies beneficial activations.
Furthermore, during inference, a Transform Matrix is used to simulate the cross-lingual connection in a monolingual setting through representation transformation.
Our experiments on six benchmarks covering 22 languages show that CC-Tuning outperforms vanilla SFT and offers a strong latent-level alternative to data-level augmentation methods. Further analysis also highlights the practicality of CC-Tuning and the potential of latent-level cross-lingual interactions in advancing the multilingual performance of LLMs.
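The abstract describes the mechanism only at a high level. The following is a minimal, hypothetical sketch of the fusion idea as stated above, not the authors' released code: names such as DecisionMaker, fuse_ffn_activations, and the shape of the Transform Matrix are illustrative assumptions.

```python
# Hypothetical illustration of CC-Tuning's described mechanism (assumptions, not the paper's code).
import torch
import torch.nn as nn

class DecisionMaker(nn.Module):
    """Learned gate that scores how beneficial each English activation is for the non-English pass."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.scorer = nn.Linear(2 * hidden_size, 1)

    def forward(self, h_en: torch.Tensor, h_xx: torch.Tensor) -> torch.Tensor:
        # Per-position gate in [0, 1], conditioned on both activations.
        return torch.sigmoid(self.scorer(torch.cat([h_en, h_xx], dim=-1)))


def fuse_ffn_activations(h_en, h_xx, decision_maker):
    """Training-time fusion of English and non-English feed-forward activations."""
    gate = decision_maker(h_en, h_xx)          # (batch, seq, 1)
    return gate * h_en + (1.0 - gate) * h_xx   # fused activation passed to the next layer


def simulate_english_side(h_xx, transform_matrix):
    """Inference-time substitute: approximate English-side activations from the
    non-English ones via a learned linear transform, so no parallel English input is needed."""
    return h_xx @ transform_matrix


# Toy usage with random tensors standing in for one layer's feed-forward outputs.
hidden = 16
dm = DecisionMaker(hidden)
h_en = torch.randn(2, 5, hidden)   # activations from the English input
h_xx = torch.randn(2, 5, hidden)   # activations from the non-English input
fused_train = fuse_ffn_activations(h_en, h_xx, dm)

W = torch.randn(hidden, hidden)    # stand-in for the learned Transform Matrix
fused_infer = fuse_ffn_activations(simulate_english_side(h_xx, W), h_xx, dm)
```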
Paper Type: Long
Research Area: Multilingualism and Cross-Lingual NLP
Research Area Keywords: multilingualism, cross-lingual transfer, multilingual representations
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: Arabic, Bengali, German, Greek, English, Spanish, Basque, French, Hindi, Indonesian, Japanese, Korean, Portuguese, Russian, Swahili, Telugu, Thai, Turkish, Urdu, Vietnamese, Yoruba, Chinese
Submission Number: 3810