Normalization of Language Embeddings for Cross-Lingual Alignment

Sep 29, 2021 (edited May 09, 2022) · ICLR 2022 Poster
  • Keywords: cross-lingual word embeddings, natural language processing
  • Abstract: Learning a good transfer function to map the word vectors of two languages into a shared cross-lingual word vector space plays a crucial role in cross-lingual NLP. It is useful in translation tasks and important in allowing complex models built on a high-resource language like English to be applied directly to an aligned low-resource language. While Procrustes and other techniques can align language embeddings with some success, it has recently been identified that structural differences (for instance, due to differing word frequency) create distinct profiles for various monolingual embeddings. When these profiles differ across languages, the difference correlates with how well the languages can be aligned and with their performance on cross-lingual downstream tasks. In this work, we develop a very general language embedding normalization procedure, building on and subsuming various previous approaches, which removes these structural profiles across languages without destroying their intrinsic meaning. We demonstrate that meaning is retained and alignment is improved on similarity, translation, and cross-language classification tasks. Our proposed normalization clearly outperforms prior approaches such as centering and vector normalization on each task and with each alignment approach.
  • One-sentence Summary: Our embedding normalization subsumes existing approaches and consistently improves cross-lingual alignment.
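The abstract mentions centering and vector normalization as prior normalization steps, composed with a Procrustes alignment. Below is a minimal NumPy sketch of that generic pipeline (not the authors' specific procedure): center each embedding space, scale rows to unit length, then solve the orthogonal Procrustes problem via SVD. The synthetic data and function names are illustrative assumptions.

```python
import numpy as np

def normalize(X):
    # Centering: remove the mean vector (one common "structural profile").
    X = X - X.mean(axis=0, keepdims=True)
    # Vector normalization: scale each word vector to unit length.
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def procrustes(X, Y):
    # Orthogonal Procrustes: find the rotation W minimizing ||X @ W - Y||_F.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy example: the "target language" is an exact rotation of the source.
rng = np.random.default_rng(0)
src = rng.normal(size=(1000, 50))                # source-language embeddings
R, _ = np.linalg.qr(rng.normal(size=(50, 50)))   # a random orthogonal map
tgt = src @ R                                    # target-language embeddings

Xs, Xt = normalize(src), normalize(tgt)
W = procrustes(Xs, Xt)
print(np.allclose(Xs @ W, Xt, atol=1e-6))        # alignment recovers the rotation
```

Because centering and unit-norm scaling commute with an orthogonal map, the learned `W` recovers the planted rotation exactly in this toy case; with real bilingual lexicons the fit is only approximate.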