Score: 2

Sparse Autoencoders Can Capture Language-Specific Concepts Across Diverse Languages

Published: July 15, 2025 | arXiv ID: 2507.11230v2

By: Lyzander Marciano Andrylie, Inaya Rahmanisa, Mahardika Krisna Ihsani, and more

Potential Business Impact:

Identifies language-specific features inside large language models, enabling interpretable language identification and control over multilingual output.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Understanding the multilingual mechanisms of large language models (LLMs) provides insight into how they process different languages, yet this remains challenging. Existing studies often focus on individual neurons, but their polysemantic nature makes it difficult to isolate language-specific units from cross-lingual representations. To address this, we explore sparse autoencoders (SAEs) for their ability to learn monosemantic features that represent concrete and abstract concepts across languages in LLMs. While some of these features are language-independent, the presence of language-specific features remains underexplored. In this work, we introduce SAE-LAPE, a method based on feature activation probability, to identify language-specific features within the feed-forward network. We find that many such features predominantly appear in the middle to final layers of the model and are interpretable. These features influence the model's multilingual performance and language output, and they can be used for language identification with performance comparable to that of fastText while offering greater interpretability. Our code is available at https://github.com/LyzanderAndrylie/language-specific-features.
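The listing does not spell out SAE-LAPE's exact procedure, but the abstract describes it as selecting language-specific features by their activation probability. Below is a minimal sketch, assuming you already have per-token SAE feature activations grouped by language, of how a LAPE-style activation-probability criterion could flag such features; the function name, thresholds, and random stand-in data are all illustrative, not the authors' implementation.

```python
import numpy as np

def language_specific_features(acts_by_lang, act_threshold=0.0,
                               entropy_threshold=0.9, top_k=100):
    """Sketch of a LAPE-style selection: estimate each SAE feature's
    activation probability per language, then keep features whose
    probability mass concentrates on a single language (low entropy)."""
    langs = sorted(acts_by_lang)
    # p[l, f] = fraction of tokens in language l on which feature f fires.
    probs = np.stack([
        (acts_by_lang[l] > act_threshold).mean(axis=0) for l in langs
    ])  # shape: (num_languages, num_features)

    # Normalize across languages; low entropy means the feature fires
    # almost exclusively for one language.
    norm = probs / np.clip(probs.sum(axis=0, keepdims=True), 1e-12, None)
    entropy = -(norm * np.log(np.clip(norm, 1e-12, None))).sum(axis=0)

    candidates = np.where(entropy < entropy_threshold)[0]
    # Rank candidates by peak activation probability and assign each
    # to the language where it fires most often.
    order = candidates[np.argsort(-probs[:, candidates].max(axis=0))][:top_k]
    return {int(f): langs[int(probs[:, f].argmax())] for f in order}

# Example with random stand-in activations (tokens x SAE features) per language.
rng = np.random.default_rng(0)
acts = {"en": rng.random((1000, 512)), "id": rng.random((1000, 512))}
print(list(language_specific_features(acts).items())[:5])
```

The entropy-based scoring here follows the general LAPE idea from prior neuron-level work; the paper's SAE-LAPE variant may weight or threshold the probabilities differently.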

Country of Origin
šŸ‡®šŸ‡© šŸ‡¦šŸ‡Ŗ Indonesia, United Arab Emirates

Repos / Data Links
https://github.com/LyzanderAndrylie/language-specific-features

Page Count
167 pages

Category
Computer Science:
Computation and Language