High-Dimensional Interlingual Representations of Large Language Models
By: Bryan Wilie, Samuel Cahyawijaya, Junxian He, and more
Potential Business Impact:
Makes computers understand different languages better.
Large language models (LLMs) trained on massive multilingual datasets hint at the formation of interlingual constructs: a shared subspace in the representation space. However, evidence regarding this phenomenon is mixed, leaving it unclear whether these models truly develop unified interlingual representations or only partially aligned constructs. We explore 31 diverse languages varying in resource level, typology, and geographical region, and find that multilingual LLMs exhibit inconsistent cross-lingual alignments. To address this, we propose an interlingual representation framework that identifies both a shared interlingual semantic subspace and fragmented components that arise from representational limitations. We introduce the Interlingual Local Overlap (ILO) score, which quantifies interlingual alignment by comparing the local neighborhood structures of high-dimensional representations. We utilize ILO to investigate the impact of single-language fine-tuning on the interlingual representations in multilingual LLMs. Our results indicate that training exclusively on a single language disrupts the alignment in early layers, while freezing these layers preserves the alignment of interlingual representations, leading to improved cross-lingual generalization. These results validate our framework and metric for evaluating interlingual representations, and further underscore that interlingual alignment is crucial for scalable multilingual learning.
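The abstract describes ILO as comparing local neighborhood structures of representations across languages. The paper's exact formula is not given here, so the following is only an illustrative sketch of a generic k-nearest-neighbor overlap score over parallel sentence embeddings; the function name `local_overlap` and all parameters are assumptions, not the authors' implementation.

```python
import numpy as np

def local_overlap(X, Y, k=5):
    """Illustrative local-neighborhood overlap between two sets of parallel
    sentence representations X and Y (each of shape n x d).
    NOTE: a generic sketch, not the paper's exact ILO definition.
    For each sentence i, the k nearest neighbors of X[i] within X are
    compared to the k nearest neighbors of Y[i] within Y; well-aligned
    representations should preserve this local neighborhood structure."""
    def knn(Z, k):
        # pairwise Euclidean distances within one language's representations
        d = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)  # exclude each point from its own neighborhood
        return np.argsort(d, axis=1)[:, :k]

    nx, ny = knn(X, k), knn(Y, k)
    # fraction of shared neighbor indices, averaged over all sentences
    overlaps = [len(set(a) & set(b)) / k for a, b in zip(nx, ny)]
    return float(np.mean(overlaps))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
Y = X + 0.01 * rng.normal(size=(50, 8))  # nearly identical geometry
print(local_overlap(X, Y))  # high overlap when local structure is preserved
```

Under this toy definition, representations with identical geometry score 1.0, while unrelated random embeddings score near k/(n-1), which is why a drop in the score can signal the early-layer disruption the abstract attributes to single-language fine-tuning.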
Similar Papers
Language Surgery in Multilingual Large Language Models
Computation and Language
Makes computers switch languages without losing meaning.
Can you map it to English? The Role of Cross-Lingual Alignment in Multilingual Performance of LLMs
Computation and Language
Helps computers understand many languages without extra training.
Language Lives in Sparse Dimensions: Toward Interpretable and Efficient Multilingual Control for Large Language Models
Computation and Language
Changes computer language without losing meaning.