Sparse Autoencoders Can Capture Language-Specific Concepts Across Diverse Languages
By: Lyzander Marciano Andrylie, Inaya Rahmanisa, Mahardika Krisna Ihsani, and more
Potential Business Impact:
Finds language-specific parts inside AI brains.
Understanding the multilingual mechanisms of large language models (LLMs) provides insight into how they process different languages, yet this remains challenging. Existing studies often focus on individual neurons, but their polysemantic nature makes it difficult to isolate language-specific units from cross-lingual representations. To address this, we explore sparse autoencoders (SAEs) for their ability to learn monosemantic features that represent concrete and abstract concepts across languages in LLMs. While some of these features are language-independent, the presence of language-specific features remains underexplored. In this work, we introduce SAE-LAPE, a method based on feature activation probability, to identify language-specific features within the feed-forward network. We find that many such features predominantly appear in the middle to final layers of the model and are interpretable. These features influence the model's multilingual performance and language output, and they can be used for language identification with performance comparable to fastText while offering greater interpretability. Our code is available at https://github.com/LyzanderAndrylie/language-specific-features.
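The abstract only names the ingredients of SAE-LAPE (SAE feature activations and per-language activation probabilities). As a rough illustration of what a LAPE-style selection over SAE features might look like, here is a minimal NumPy sketch. The function names, the activation threshold, and the entropy-based scoring are assumptions made for illustration, not the paper's implementation; see the linked repository for the actual method.

```python
import numpy as np

def language_activation_probabilities(sae_activations, threshold=0.0):
    """Compute per-language activation probabilities for SAE features.

    sae_activations: dict mapping a language code to an array of shape
    (num_tokens, num_sae_features), holding SAE feature activations
    collected from the feed-forward network of a chosen layer.

    Returns the sorted language codes and an array of shape
    (num_languages, num_features) giving, for each language, the fraction
    of tokens on which each feature fires above the threshold.
    """
    langs = sorted(sae_activations)
    probs = np.stack(
        [(sae_activations[lang] > threshold).mean(axis=0) for lang in langs]
    )
    return langs, probs

def language_specific_features(probs, top_k=100, eps=1e-8):
    """Rank features by how concentrated their activity is on one language.

    Normalizes each feature's activation probabilities over languages and
    scores it by the entropy of that distribution: low entropy means the
    feature fires almost exclusively for a single language. Returns the
    indices of the top_k lowest-entropy (most language-specific) features.
    """
    dist = probs / (probs.sum(axis=0, keepdims=True) + eps)
    entropy = -(dist * np.log(dist + eps)).sum(axis=0)
    return np.argsort(entropy)[:top_k]
```

In this sketch, a feature that activates on, say, 30% of Indonesian tokens but almost never on English or German tokens would receive a near-zero entropy score and surface at the top of the ranking; such per-language firing profiles are also what makes the features usable for language identification.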
Similar Papers
Unveiling Language-Specific Features in Large Language Models via Sparse Autoencoders
Computation and Language
Makes computers speak only one language at a time.
Evaluating Sparse Autoencoders for Monosemantic Representation
Machine Learning (CS)
Makes AI understand ideas more clearly.
Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models
CV and Pattern Recognition
Helps AI understand pictures better, controlling its answers.